
#156 Blocking-Logic after writing dumpfile to tape

open
nobody
restore (48)
5
2012-04-04
2012-04-04
Patrik Schindler
No

Modern tape drives easily outperform common LAN speeds. Even local disk arrays may fail to sustain the drive's rate while dump collects inodes and their data blocks.

For that reason, I want to implement a "staged" backup solution. Stage 1 dumps remote machines and local filesystems into files on a disk array. Stage 2 would use cat, dd, or similar to write those dump files to the tape drive.
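The two stages might look like the sketch below. The host name, paths, record size, and the tape device /dev/nst0 are placeholders, and to keep the sketch runnable without a tape drive, /dev/zero stands in for real dump output and a scratch file stands in for the tape:

```shell
#!/bin/sh
# A real stage 1 would be something like:
#   ssh root@fileserver "dump -0 -b 64 -f - /home" > /array/fileserver-home.dump
# and stage 2:
#   dd if=/array/fileserver-home.dump of=/dev/nst0 bs=64k
set -e
staging=$(mktemp -d)

# Stage 1 stand-in: produce a "dump file" of sixteen 64 KiB records
dd if=/dev/zero of="$staging/host.dump" bs=64k count=16 2>/dev/null

# Stage 2: stream the finished dump file out in 64 KiB blocks
dd if="$staging/host.dump" of="$staging/fake-tape" bs=64k 2>/dev/null

# The staged copy must be byte-identical to the original dump file
cmp "$staging/host.dump" "$staging/fake-tape" && echo "stage 2 copy is byte-identical"
```

The point of the indirection is that stage 2 is a plain sequential copy, so the tape can stream at full speed regardless of how slowly dump produced the file.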

I experimented with a lot of different parameters, but I always fail to read the dump with restore. At the very beginning of restore -i -b ..., it complains that the tape is not a dump tape, reports decompression errors, and so on. I tried fixed and variable blocksizes on the tape drive and different blocksizes with dump (and, obviously, the same one with restore). With 256K blocks, the tape driver begins to throw errors like this:

Mar 21 12:40:11 backup-linux kernel: [74497.602212] st0: Can't allocate 262148 byte tape buffer.

I dug into this issue a lot deeper and found: without compression (no -z for dump), one can write dump files to tape using dd with an autoblocking tape drive, as long as the blocksize used with dd is an even multiple of the blocksize used with dump (-b). Dumps written to tape this way can be read back directly from the tape with restore.
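The even-multiple rule above can be checked with a bit of arithmetic. The record size of 10 KiB and the candidate dd blocksizes are made-up example values; dump's -b option takes the record size in KiB, so "dump -b 10" writes 10240-byte records:

```shell
#!/bin/sh
# Hypothetical example: records written by "dump -b 10"
dump_b=10
record=$((dump_b * 1024))   # 10240 bytes per dump record

# Check some candidate dd blocksizes against the record size
for bs in 10240 20480 40960 15360; do
    if [ $((bs % record)) -eq 0 ]; then
        echo "bs=$bs: even multiple of $record - OK for dd"
    else
        echo "bs=$bs: NOT a multiple of $record - restore will likely fail"
    fi
done
```

With these numbers, 10240, 20480, and 40960 pass, while 15360 does not, because dd would then split dump records across tape block boundaries.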

As soon as compression is active (-z with dump), this fails with the errors above. Now I don't know whether this could be a bug in dump/restore or I missed something.

Thanks for your attention!

Discussion