Re: [Burp-users] Resume big file transfer when network lost
From: Graham K. <gr...@gr...> - 2016-11-16 10:11:47
On Wed, November 16, 2016 9:22 am, ph...@ma... wrote:
> On 14.11.2016 11:41, Graham Keeling wrote:
>> On Sun, Nov 13, 2016 at 10:23:28PM +0100, ph...@ma... wrote:
>>> Hello people,
>>>
>>> I'm a new user of Burp and I'm testing its capability to minimise
>>> network usage.
>>>
>>> Is there a way to have a "resume file" capability?
>>> If I kill the client during a big file transfer, it will restart from
>>> the beginning of this file, even with working_dir_recovery_method =
>>> resume.
>>> I tried protocol 1 and protocol 2 with the same issue… Is this normal?
>>> Did I miss something?
>>>
>>> Thanks for the help.
>>>
>>> Phil
>>
>> Hello,
>>
>> There is no "resume file" capability.
>>
>> Protocol 1 works at file-level granularity.
>>
>> Protocol 2 works at a combination of file-level and block-level
>> granularity, which in practice should mean that you are able to resume
>> within the last 25MB that has been transferred. If it doesn't, there
>> is a bug.
>
> I'm using protocol 2.
> This is my test:
> 1. Run a backup of only one file, 1GB in size.
> 2. Kill the client at 90% (900MB has been uploaded, and I can see the
>    size of the globale/data directory at 900MB).
> 3. Wait 2h, then run a backup again.
> 4. 1GB is uploaded to the server again, and I can see the size of the
>    globale/data directory at 1900MB.
>
> Is it a bug? How can I check?

If you were using 'working_dir_recovery_method=resume', then the desired
behaviour is that it will deduplicate against the already transferred
blocks. So, I would say that this is a bug (or an unimplemented feature).

If you were using 'working_dir_recovery_method=delete', then it should
delete unused blocks the next time a champ_chooser process for the
dedup_group starts. If you have only one client, that effectively means
the next time you run a backup.
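
For reference, a minimal sketch of the server-side setting being discussed. The file path is an assumption (distributions vary); the two recognised values are the ones named above:

```
# burp server configuration (e.g. /etc/burp/burp-server.conf, or a
# per-client file under clientconfdir/ -- path is an assumption).

# 'resume': on the next backup, try to carry on using the interrupted
# working directory, deduplicating against already-transferred blocks.
working_dir_recovery_method = resume

# 'delete': discard the interrupted working directory instead; unused
# blocks are cleaned up when the next champ_chooser process starts.
# working_dir_recovery_method = delete
```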