Activity for Dump/Restore ext2/3/4 filesystem backup

  • Tim Woodall modified ticket #88

    minor prompt issue

  • Tim Woodall modified ticket #50

    EXT2 directory corrupted problem

  • Tim Woodall modified ticket #155

    crash when restoring a symlink w/really long name (>MAXPATHLEN)

  • Tim Woodall posted a comment on ticket #155

    I was not able to reproduce this particular issue - I don't actually see how a symlink can be >MAXPATHLEN - but I did find that the code incorrectly assumes the data is null-terminated. I've pushed a fix to v0.4b49-wip for that issue.
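
    As an aside, a minimal shell sketch (purely illustrative, not from the ticket) of why a symlink target longer than MAXPATHLEN should not normally be creatable in the first place:

        long=$(printf 'x%.0s' $(seq 1 5000))    # a 5000-byte target, i.e. longer than PATH_MAX (4096)
        ln -s "$long" toolong                   # expected to fail with "File name too long" (ENAMETOOLONG)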

  • Tim Woodall modified ticket #154

    restore -rN tries to lchown() links

  • Tim Woodall posted a comment on ticket #154

    Will be fixed in the 0.4b49 release

  • Tim Woodall posted a comment on ticket #154

    Actually I was wrong about the lchown. I was looking at chmod! But I have a fix for this entire issue, pushed to the v0.4b49-wip branch for now along with a testcase.

  • Tim Woodall modified ticket #154

    restore -rN tries to lchown() links

  • Tim Woodall posted a comment on ticket #154

    Can trivially reproduce the ftruncate issue:

        root@dirac:/build/dump-sf/testing/scripts/historical/dumps/x# ../../../../../restore/restore -r -f ../dump0.0.4b48.dmp
        root@dirac:/build/dump-sf/testing/scripts/historical/dumps/x# ../../../../../restore/restore -xN -f ../dump0.0.4b48.dmp
        You have not read any volumes yet.
        Unless you know which volume your file(s) are on you should start
        with the last volume and work towards the first.
        Specify next volume # (none if no more volumes): 1
        ../../../../../restore/restore:...

  • Tim Woodall modified ticket #178

    tag v0.4b37 is missing

  • Tim Woodall modified ticket #158

    restore: : ftruncate: Invalid argument

  • Tim Woodall posted a comment on ticket #158

    Closed assuming this is a long-fixed restore -C bug. I have not encountered this problem at all during the 0.4b48 testing.

  • Tim Woodall posted a comment on ticket #158

    It's not completely obvious to me whether this was restore -C testing or not, but the error of calling ftruncate on a non-file when testing with restore -C was fixed in 2016.

  • Tim Woodall modified ticket #116

    Dump 0.4b26 (at least on suse8.0) handles old dates wrong!!

  • Tim Woodall posted a comment on ticket #116

    There is a 2038 time_t issue in dump which I will be addressing in a future release. Because of the desire to maintain backwards compatibility, there is some significant complexity in dealing with this. I suspect at some point it will be necessary to move to dump 0.5 and break forwards compatibility (no 0.4b restore will be able to read 0.5 tapes), and if that has to happen I want to make sure that everything that is currently limited by the size of u_spcl is supported in the new tape format.
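
    For reference, the rollover being referred to (GNU date syntax, purely illustrative):

        date -u -d @2147483647    # last second representable in a signed 32-bit time_t: Tue 19 Jan 2038 03:14:07 UTC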

  • Tim Woodall modified ticket #181

    Header block is incorrect when system is not sunos or __linux__

  • Tim Woodall modified ticket #171

    dump fails with -fno-common

  • Tim Woodall posted a comment on ticket #171

    Fixed in 0.4b48

  • Tim Woodall modified ticket #150

    Display Problem II on Large Dumps

  • Tim Woodall posted a comment on ticket #150

    Thanks for the report. Fixed in 0.4b48.

  • Tim Woodall modified ticket #144

    Crash on compare restore with extended attributes

  • Tim Woodall posted a comment on ticket #144

    I haven't investigated this issue in detail but I've been unable to reproduce any crashes restoring extended attributes using v0.4b48. I'm assuming it is now resolved.

  • Tim Woodall modified ticket #169

    restore command brings segfault

  • Tim Woodall posted a comment on ticket #169

    There were serious issues with dump 0.4b45; I have been unable to restore any compressed dump made with v0.4b45. All these issues were resolved in 0.4b46.

  • Tim Woodall modified ticket #177

    restore -C fails on ext4 for long symlinks that use extents.

  • Tim Woodall modified ticket #162

    file corruption

  • Tim Woodall posted a comment on ticket #162

    Resolved in 0.4b48

  • Tim Woodall modified ticket #168

    Problem with dump/restore xattrs

  • Tim Woodall posted a comment on ticket #168

    Resolved in 0.4b48

  • Tim Woodall modified ticket #175

    dump restores a qcows (sparse) drive as a blank drive

  • Tim Woodall posted a comment on ticket #175

    I believe this was caused by one of the ext4 bugs fixed in 0.4b48, most likely the limit of 2^32 blocks on a file system.

  • Tim Woodall modified ticket #174

    corrupted dumps of 2TB+ filesystems....

  • Tim Woodall posted a comment on ticket #174

    Many thanks for the reports and patches. I likely would never have found these issues as I have no filesystems this big. Resolved in 0.4b48

  • Tim Woodall modified ticket #176

    dump records garbage for Uninit EXT4_EXTENTS_FL.

  • Tim Woodall posted a comment on ticket #176

    Resolved in 0.4b48

  • Tim Woodall modified ticket #177

    restore -C fails on ext4 for long symlinks that use extents.

  • Tim Woodall posted a comment on ticket #177

    Resolved in 0.4b48

  • Tim Woodall modified ticket #179

    dump fails to dump small files on systems with ext4 feature inline data

  • Tim Woodall posted a comment on ticket #179

    Resolved in 0.4b48

  • Tim Woodall modified ticket #180

    restore -C with a -D path of >64 characters fails.

  • Tim Woodall posted a comment on ticket #180

    Resolved in 0.4b48

  • Tim Woodall updated merge request #3

    Many fixes, along with testcases for many of them.

  • Tim Woodall posted a comment on merge request #3

    Changes are merged and pushed via a different branch. closing this.

  • Dump/Restore ext2/3/4 filesystem backup released /dump/0.4b48/dump-0.4b48.tar.gz

  • Dump/Restore ext2/3/4 filesystem backup released /dump/0.4b48/CHANGES.txt

  • Tim Woodall modified a comment on merge request #3

    I've created branch v0.4b48release in my fork where I've cleaned up and updated NEWS alongside the commits. Other than the "make 0.4b48 release" commit (cf. cb48d59c9e2a9997ac82e1eef117614b2261d737), where configure.ac needs updating, and the release date in NEWS, the v0.4b48release branch is ready for release.

  • Tim Woodall posted a comment on merge request #3

    I've created branch v0.4b48release in my fork where I've cleaned up and updated NEWS alongside the commits. Other than the "make 0.4b48 release" commit (cf. cb48d59c9e2a9997ac82e1eef117614b2261d737), where configure.ac needs updating, and the release date in NEWS, the v0.4b48release branch is ready for release.

  • Tim Woodall created ticket #181

    Header block is incorrect when system is not sunos or __linux__

  • Tim Woodall posted a comment on merge request #3

    Note that fixing the sparse hole only works if the hole is a multiple of the filesystem block size. If it is not, then dump reports the problem and how to fix it:

        ./sparse-1: Found 1 missing blocks - assuming version 0.4b42/43 dump sparse file bug
        Restore file to a filesystem with a block size no more than 1024 to get a fixed file
        ./sparse-2: Found 2 missing blocks - assuming version 0.4b42/43 dump sparse file bug
        Restore file to a filesystem with a block size no more than 2048 to get a fixed file
        ...

  • Tim Woodall posted a comment on ticket #175

    I realize this is a very old ticket, but I suspect this is due to dump not correctly reading ext4 uninitialized blocks: instead of writing zeros to the tape, it wrote garbage. It would be interesting to know whether, at the places where the corruption happens, the clean file has zeros. It's also possible that this was a bug due to dump not reading blocks with an address >2^32; that implies a filesystem of at least 4T, or 16T if the block size is the more usual 4K. Both of these issues are fixed in the merge...
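
    One possible way to run the check suggested above, sketched with hypothetical file names (cmp -l prints the offset and the two differing byte values in octal):

        cmp -l clean.img restored.img | awk '$2 == 0 { z++ } END { print z+0, "of", NR, "differing bytes are zero in the clean file" }'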

  • Tim Woodall posted a comment on merge request #3

    Added another fix to correctly restore sparse files written with versions 0.4b42 and 0.4b43. It's unlikely that many people still have dumps this old, but it cleans up the code a bit anyway. 0.4b42 was cut in Jun 2009, and 0.4b44, which had the fix, was cut in Jun 2011.

  • Tim Woodall posted a comment on merge request #3

    The fix to support dumps of over 4TB uses four of the reserved-for-future-use bytes from the tape header. The code can correctly handle all dumps written with old versions, and old versions will handle these dumps correctly too, except for the case where this rollover would occur - dumps written with the old code also had this problem during restore.

  • Tim Woodall modified a comment on merge request #3

    Some more changes:
    - Fix all the compiler warnings. Now compiles cleanly with -W -Wall.
    - Support a -q flag in restore, same as dump.
    - Improvements on sparse file restoring and comparing. The old code did a seek on every tape block. You can now restore a 5TB sparse file (negligible data) in around 5 minutes rather than many hours.
    - Fix the problem of volumes rolling over at 2^32 blocks.

        DUMP: Closing /build/dump-sf/testing/scripts/tmp.pQhoE3StJG/tmp.gbuJUBR09x/tape
        DUMP: Volume 5 completed at: Sun Sep 29...
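
    For illustration, a sparse test file of the kind mentioned above can be created like this (size and name are illustrative):

        truncate -s 5T sparsefile
        du -h --apparent-size sparsefile    # reports 5.0T
        du -h sparsefile                    # reports ~0 - no blocks are actually allocated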

  • Tim Woodall posted a comment on merge request #3

    Some more changes:
    - Fix all the compiler warnings. Now compiles cleanly with -W -Wall.
    - Support a -q flag in restore, same as dump.
    - Improvements on sparse file restoring and comparing. The old code did a seek on every tape block. You can now restore a 5TB sparse file (negligible data) in around 5 minutes rather than many hours.
    - Fix the problem of volumes rolling over at 2^32 blocks.

        DUMP: Closing /build/dump-sf/testing/scripts/tmp.pQhoE3StJG/tmp.gbuJUBR09x/tape
        DUMP: Volume 5 completed at: Sun Sep 29...

  • Tim Woodall posted a comment on merge request #3

    Redone the merge branch. Dropped the testcases - they're still available in the testcases branch, but as I've been working on this I've been changing a testcase, then changing the code, which was leading to a deep branch of changes. So for now I'm keeping the testcases separate. full-regression.sh now tests an l0 and l1 dump, while historical-regression.sh tests a restore of an l0 and l1 dump of every version of dump from 0.4b5 through to 0.4b47, except 0.4b45, for which I get a sigsegv when I try to run dump. While...

  • Tim Woodall posted a comment on merge request #3

    Added a couple of new test cases:
    - xattrs-regression.sh - tests the failure modes where tape and disk disagree. Some simplification of the code as a result.
    - historical-regression.sh - runs the latest restore on a dump from every version from 0.4b5 and confirms that it restores the correct data.

  • Tim Woodall posted a comment on merge request #3

    Massive speedup of the big block addr tests by limiting the number of inodes on the test fs. run_all_tests.sh now completes in under a minute.
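
    For illustration, a minimal sketch of capping the inode count on a loopback test filesystem, which is what keeps the inode scan - and hence the tests - fast (image name and sizes are illustrative):

        truncate -s 3G test.img
        mkfs.ext4 -F -N 128 test.img    # -N overrides the default inode count; -F allows use of a regular file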

  • Tim Woodall posted a comment on merge request #3

    Also fixes #162 which is a duplicate of #176

  • Tim Woodall created merge request #3 on Code (git)

    Many fixes, along with testcases for many of them.

  • Tim Woodall created ticket #180

    restore -C with a -D path of >64 characters fails.

  • Tim Woodall created ticket #179

    dump fails to dump small files on systems with ext4 feature inline data

  • Tim Woodall created ticket #178

    tag v0.4b37 is missing

  • Tim Woodall posted a comment on ticket #174

    A somewhat modified testcase that runs a bit quicker.

  • Tim Woodall posted a comment on ticket #174

    There was a minor bug in the original patches which I've fixed in the attached patch. I had to add a bit of extra logging to actually show that the test case was dumping an EA block >2^32:

        dumping EA (block) in inode #13 block=4312435892

  • Tim Woodall created ticket #177

    restore -C fails on ext4 for long symlinks that use extents.

  • Tim Woodall posted a comment on ticket #176

    I've just discovered #162 which this is a duplicate of.

  • Tim Woodall posted a comment on ticket #162

    attaching patch here too.

  • Tim Woodall posted a comment on ticket #162

    I've just reported this as #176 and attached a patch there that fixes this.

  • Tim Woodall posted a comment on ticket #176

    Testcase I was using to test with. This is not intended to be a "works anywhere" testcase. It will definitely need tweaking to work for anybody else (it requires root to run and will potentially trash filesystems, so it should be understood before running!)

  • Tim Woodall posted a comment on ticket #176

    I haven't investigated at all but this could possibly be the cause of #175

  • Tim Woodall created ticket #176

    dump records garbage for Uninit EXT4_EXTENTS_FL.

  • Tim Woodall posted a comment on ticket #168

    Attached is a fix. There was a fix in Debian for a very long time for the same underlying issue for files; the difference was that for files, restore worked but restore -C failed. That half fix meant that restore -C worked for directories even though the restore didn't. This patch combines the existing Debian patch with the fix for the xattr for dirs such that restore and restore -C both work correctly.
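
    For illustration, a minimal sketch of the directory xattr round-trip the fix is about (attribute name and value are hypothetical):

        mkdir xattrdir
        setfattr -n user.example -v hello xattrdir    # set an extended attribute on a directory
        getfattr -d xattrdir                          # dump user.* attributes to verify it is set (and again after a restore)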

  • Greg Oster posted a comment on ticket #174

    You're most welcome! Thanks for coming up with a small test case -- I switched from 'dump' to 'restic' for backups at about the same time that I reported the issue, and so haven't had the need to chase this problem further. Later... Greg Oster

  • Tim Woodall posted a comment on ticket #174

    Thanks for this! I've managed to generate a test case that doesn't require terabytes of data, only about 3GB of diskspace to reproduce. (This assumes you have no loop devices in use - it will trash them if there are!)

        mkdir -p d1.mnt
        mkdir -p d2.mnt
        mkdir -p big
        rm -f d1
        truncate -s 3G d1
        losetup -f d1
        mkfs.ext4 /dev/loop0
        losetup -d /dev/loop0
        rm -f d2
        truncate -s 1G d2
        losetup -f d2
        mkfs.ext4 /dev/loop0
        losetup -d /dev/loop0
        mount -o loop d1 d1.mnt/
        mount -o loop d2 d2.mnt/
        truncate -s 15T d1.mnt/pv...

  • Todd posted a comment on ticket #175

    What gparted sees

  • Todd created ticket #175

    dump restores a qcows (sparse) drive as a blank drive

  • dreamlayers posted a comment on ticket #162

    Thank you for investigating this. It is easy to create a file with a single unwritten extent using the fallocate command, like: fallocate -l 512 foo. Such a file will trigger this bug.
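
    For illustration, that reproducer plus a way to confirm the extent really is unwritten (file name is illustrative):

        fallocate -l 512 foo    # allocate 512 bytes without writing any data
        filefrag -v foo         # the single extent should be flagged "unwritten"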

  • fwxx posted a comment on ticket #162

    I think I encountered the same bug on slackware64 15.0 using dump version 0.4b47 from the slackbuilds.org site (e2fsprogs-1.46.5). When checking the restored dump of that tool, some files had a "garbage" block of data somewhere, e.g. at the end. Comparing that with debugfs on the original ext4 filesystem revealed that the blocks on disk are identical to the restored file, but dump seems not to honor some new(?) filesystem block marker, which can mark a filesystem block as "unwritten". Example:...
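
    For illustration, a sketch of the kind of debugfs comparison described above (device and inode number are placeholders):

        debugfs -R 'stat <12>' /dev/loop0            # on-disk inode details for inode 12
        debugfs -R 'dump_extents <12>' /dev/loop0    # its extent tree; uninitialized extents are flagged Uninit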

  • Greg Oster modified a comment on ticket #174

    These changes are necessary, but not sufficient. A multi-tape dump looks like it is corrupting a file that spans two tapes. The error seen is:

        Incorrect block for <filename> at 11432470600 blocks
        Incorrect block for <filename> at 11432470601 blocks
        ...
        Incorrect block for <filename> at 11432470790 blocks
        Incorrect block for <filename> at 11432470791 blocks

    When the 16TB restore finishes I'll know if this is the only file that is corrupt. [UPDATE: 16TB restore...

  • Greg Oster modified a comment on ticket #174

    These changes are necessary, but not sufficient. A multi-tape dump looks like it is corrupting a file that spans two tapes. The error seen is:

        Incorrect block for <filename> at 11432470600 blocks
        Incorrect block for <filename> at 11432470601 blocks
        ...
        Incorrect block for <filename> at 11432470790 blocks
        Incorrect block for <filename> at 11432470791 blocks

    When the 16TB restore finishes I'll know if this is the only file that is corrupt. I suspect that...

  • Greg Oster posted a comment on ticket #174

    These changes are necessary, but not sufficient. A multi-tape dump looks like it is corrupting a file that spans two tapes. The error seen is:

        Incorrect block for <filename> at 11432470600 blocks
        Incorrect block for <filename> at 11432470601 blocks
        ...
        Incorrect block for <filename> at 11432470790 blocks
        Incorrect block for <filename> at 11432470791 blocks

    When the 16TB restore finishes I'll know if this is the only file that is corrupt. I suspect that...

  • Greg Oster posted a comment on ticket #174

    8.5TB of data successfully dumped/restored with the submitted patches in use. The dump/restore of this data set had thousands of validation errors without the patches. Later... Greg Oster

  • Greg Oster posted a comment on ticket #150

    Attached is a small diff to fix the percentages. Later... Greg Oster

  • Greg Oster posted a comment on ticket #174

    My math here might be wrong... if 4294967296 is the maximum logical block address, then the corruption wouldn't be seen until LBAs over that value, i.e. for a block size of 4K, that would mean filesystems larger than 16TB... which would help explain why this hasn't been reported before. Later... Greg Oster
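
    As a quick sanity check of that arithmetic (shell arithmetic, purely illustrative):

        echo $(( 2**32 * 4096 / 2**40 ))    # 2^32 blocks of 4KiB = 16 (TiB)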

  • Greg Oster created ticket #174

    corrupted dumps of 2TB+ filesystems....

  • Mike Frysinger modified ticket #173

    RMT - extend 's' command should consume '\n'?

  • Mike Frysinger posted a comment on ticket #173

    glad you were able to figure it out

  • Bebu sa Ware posted a comment on ticket #173

    Please close. Should have read the fine manual :) Neither the 'S' nor the 's' command is terminated by '\n'. So one way to test is with the V1 version 'v' command after the 's' cmd, e.g. to retrieve mt_blkno: sBv A1 A1

  • Bebu sa Ware created ticket #173

    RMT - extend 's' command should consume '\n'?

  • Dump/Restore ext2/3/4 filesystem backup released /dump/0.4b47/dump-0.4b47.tar.gz

  • Dump/Restore ext2/3/4 filesystem backup released /dump/0.4b47/CHANGES.txt

  • Mike Frysinger committed [759c5d]

    update a few more URLs to https

  • Mike Frysinger committed [9175b6]

    convert all files to UTF-8

  • Mike Frysinger committed [d8edf1]

    update dump site URLs

  • Mike Frysinger committed [cb48d5]

    make 0.4b47 release

  • Mike Frysinger modified ticket #27

    symlink file times aren't restored

  • Mike Frysinger posted a comment on ticket #27

    thanks, merged in 575766acd134cab74dc68bdd7d6808a338fc9a37

  • Mike Frysinger committed [575766]

    restore: update symlink timestamps

  • Mike Frysinger modified ticket #25

    restore -C calls ftruncate() on non-files

  • Mike Frysinger posted a comment on ticket #25

    thanks, fixed in bd61936ee873fe4d857f858b680efb4f3ce75485
