#197 Speed and estimated time not correct with backup dir

Status: closed
Owner: nobody
Labels: None
Updated: 2013-02-13
Created: 2011-03-15
Creator: Hans Schmidts
Private: No

I use a configuration with "User-defined directory" for "Deletion handling". With this configuration I noticed the speed drop to 0 and the estimated time grow to several days, so I aborted the operation, but found the same problem after the next start. Then I saw a big changed file, and I'm pretty sure this causes the problem: with "User-defined directory" FFS must at least copy the files from one side to that directory (instead of simply moving them, as when deleting to the recycler or when the directory is on the same drive), but this copy is not counted, neither in the total amount of data nor in the speed calculation, so the speed goes to 0 and the time to infinity while big files are copied to the backup dir.

Tested with FFS 3.13 on Win7 Ultimate x64.

Discussion

  • Zenju
    2011-03-15

    Right now, file size is not counted for files that are to be deleted. For "recycle bin", "delete permanently" and "user-defined dir (on same drive)" this is fine, since the operation is almost instantaneous.
    However "user-defined dir (on different drive)" involves a full copy operation, which should be reflected by the progress indicator.

     
  • Zenju
    2013-02-13

    The new version, FFS v5.12, implements a comprehensive and consistent model for handling statistics before and during sync:

    First, it is acknowledged that it is impossible to estimate a priori, with 100% accuracy, what will happen during synchronization:

            1. a detected file move may have to fall back to copy + delete
            2. file copy: the actual size may have changed after comparison
            3. file contains significant ADS data, is sparse or compressed
            4. auto-resolution for failed create operations due to missing source
            5. directory deletion: may contain more items than scanned by FFS (excluded by filter) or fewer (contains followed symlinks)
            6. deleting a directory to the recycler: no matter how many child elements exist, this is only 1 item to process!
            7. file/directory already deleted externally: nothing to do, 0 logical operations and data
            8. user-defined deletion directory on a different volume: a full file copy is required (instead of a move)
            9. binary file comparison: if files differ in the first few bytes, the result is already known
            10. error during file copy, retry: bytes were already copied => this increases the total workload!
    

    Secondly, it is acknowledged that it is not possible to predict the number of disk accesses. The "object count" therefore can and will only count the number of logical operations (FFS create/update/delete). Consequently it is not useful for performance estimates, and FFS will use only "bytes" for this purpose.

    Thirdly, FFS will adapt "total bytes/objects" during sync whenever it finds a deviation from the initial expectation, without undoing the objects/bytes already reported as transferred:
    E.g. a file copy errors out in the middle of the operation. Upon "retry" the reported bytes are no longer undone; instead the "expected total bytes" are increased by the size of the partly copied file. This matches the logical expectation "due to the error the total load is indeed higher than anticipated" and no longer corrupts performance measurements, which previously led to artifacts such as a "negative speed".
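
    The idea can be illustrated with a minimal sketch (hypothetical names, not the actual FFS source): the model keeps separate "expected" and "processed" counters, and a failed copy that will be retried simply increases the expected total by the bytes already written:

        // Minimal illustrative sketch of the statistics model described above.
        // "SyncProgress" and its members are hypothetical names, not FFS code.
        #include <cstdint>

        struct SyncProgress
        {
            std::int64_t bytesExpected  = 0; // estimated total; may grow during sync
            std::int64_t bytesProcessed = 0; // never undone, even after errors
            int          itemsExpected  = 0; // logical operations: create/update/delete
            int          itemsProcessed = 0;

            // called repeatedly while a file copy is running
            void reportBytes(std::int64_t delta) { bytesProcessed += delta; }

            // a copy failed after "partialBytes" and will be retried from scratch:
            // instead of rewinding bytesProcessed (which would corrupt the speed
            // calculation, e.g. produce a "negative speed"), acknowledge that the
            // total workload is larger than anticipated
            void onCopyRetry(std::int64_t partialBytes) { bytesExpected += partialBytes; }

            // an item's real size/count turned out to differ from the initial estimate
            // (ADS data, sparse/compressed files, file changed after comparison, ...)
            void adjustExpected(std::int64_t deltaBytes, int deltaItems)
            {
                bytesExpected += deltaBytes;
                itemsExpected += deltaItems;
            }
        };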

    => Final observation: due to these refinements we can keep a single statistical model for use in both the sync preview and the progress indicator.

    With regard to the original problem reported in this ticket, this means that a user-defined deletion directory is not considered when estimating statistics: "deletion handling" is ignored, the number of parent and child objects to be deleted is accumulated and added to the "item count", and "0 bytes" are expected for "bytes".

    During sync it will turn out that "0 bytes" is not correct, and the estimated total will increase as files are moved to the external disk, taking the "real" byte count into account and handling corner cases 2, 3, and 5 correctly. With regard to the "object count", it will count the number of actually executed move operations, each of which counts as one (NOT two, as one might think at first for copy + delete), carefully modelling the logical view.
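
    Using the hypothetical SyncProgress sketch above, the flow for this ticket's scenario would look roughly like this (illustrative only; file size and variable names are made up):

        SyncProgress stats;

        // Planning: one file is scheduled for deletion to a user-defined directory.
        // This is ONE logical operation; "0 bytes" are expected at this point.
        stats.adjustExpected(0, 1);

        // During sync the deletion directory turns out to be on a different volume,
        // so a full copy of the (say) 4 GB file is required: the expected total grows.
        const std::int64_t realSize = 4'000'000'000; // hypothetical file size
        stats.adjustExpected(realSize, 0);

        stats.reportBytes(realSize); // bytes are reported while the copy runs
        ++stats.itemsProcessed;      // still only ONE logical item: the "move"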

    So, in a nutshell, the new FFS statistics have a precise definition, are implemented with extreme accuracy, and adapt dynamically during sync to all unexpected circumstances.

     
  • Zenju
    2013-02-13

    • status: open --> closed