The transfer speed while backing up large files is great. In fact, it fully saturates the link's or the destination device's bandwidth.
However, AnyBackup seems to spend quite a long time processing between each file copy, which caps the transfer rate at roughly 2-3 files per second. This means that backing up large directories with thousands of small files (pictures, MP3s, etc.) takes a very long time.
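A quick back-of-envelope calculation shows why per-file overhead dominates for small files. The numbers below are illustrative assumptions (average file size, observed file rate, drive limit), not measurements from AnyBackup itself:

```python
# Back-of-envelope: why per-file overhead dominates small-file backups.
# All numbers are illustrative assumptions, not measured values.
AVG_FILE_SIZE_MB = 0.42    # a typical photo (~420 KB)
FILES_PER_SECOND = 2.5     # observed ~2-3 files/s with per-file overhead
DRIVE_LIMIT_MBPS = 145.0   # the destination drive's sequential limit

effective_mbps = AVG_FILE_SIZE_MB * FILES_PER_SECOND
print(f"effective: {effective_mbps:.2f} MB/s "
      f"({effective_mbps / DRIVE_LIMIT_MBPS:.1%} of the drive's limit)")
```

At ~2.5 files per second and ~420KB per file, the effective rate is only about 1MB/s, well under 1% of what the drive can sustain, so almost all the time goes to the per-file processing rather than to the actual copy.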
If there's something you could tune to improve the transfer speed for small files, it would speed up the whole backup process by a large factor.
I've attached a screenshot of the Task Manager Performance tab. I'm backing up from a hardware RAID 6 of 9 drives (D:, E:), which can easily saturate any device's bandwidth, to a WD Green 2TB (F:) in a Thermaltake BlacX 5G Duet USB 3.0 docking station, which tops out at ~145MB/s.
Regards,
Patrick
There are a couple of things that can be done to reduce the overhead between files; one of them is skipping the free-space refresh after every file copy.
These are actually pretty easy changes. Here's a beta build; see how it impacts your backups: https://drive.google.com/file/d/0BxmOb_XO2QKkMUFVd1RsaWJEMU0/view?usp=sharing
This definitely improved the transfer speed for small files. I just saw ~45MB/s for pictures with an average size of 420KB. This is so much faster; thanks very much!
I'm curious though, since you've removed the free space update, will it still know that a drive is full and ask me to connect the next one?
For backing up, it allocates files to drives based on free space before it ever starts copying. For restore, it allocates a content drive on the fly, but it keeps track of the amount of data it has written to each drive, so it has a theoretical idea of what the free space is. In short, you shouldn't notice any behavioral changes, though let me know if that's not the case.
Also, I should add: at the end of the backup or restore for a given drive it will still update the free space; it just does it once per backup instead of once per file.
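The scheme described above can be sketched roughly as follows. This is a minimal illustration with hypothetical names and structure, not AnyBackup's actual code: files are assigned to drives by free space up front, and the free space is tracked as bookkeeping while writing, so the disk only needs one real refresh per drive at the end:

```python
# Sketch (hypothetical names, not AnyBackup's actual code): allocate files
# to drives by free space before copying starts, and track each drive's
# remaining space internally instead of re-querying the disk per file.

def allocate(files, drives):
    """Assign each (name, size) file to the drive with the most tracked free space."""
    free = dict(drives)            # tracked free space, updated in memory
    plan = {d: [] for d in drives}
    for name, size in sorted(files, key=lambda f: f[1], reverse=True):
        target = max(free, key=free.get)
        if free[target] < size:
            raise RuntimeError(f"no drive has room for {name}")
        plan[target].append(name)
        free[target] -= size       # bookkeeping, not a disk query
    return plan

plan = allocate(
    files=[("a.jpg", 400_000), ("b.mp3", 5_000_000), ("c.jpg", 450_000)],
    drives={"F:": 2_000_000_000, "G:": 1_000_000_000},
)
print(plan)  # every file lands on F:, the drive with the most free space
```

A real implementation would refresh the actual on-disk free space once per drive when its backup finishes, matching the once-per-backup update described above.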
These changes will be in 1.1.8 when it's released.