I currently have an array of 14 disks which all use ext4, but I want to convert them to btrfs. I tried it with one disk as a test, but snapraid detected the disk as having been rewritten and I needed to run sync with the force-empty option.
Is it possible to do this without having to recalculate everything? I would have thought that, because snapraid works at the file level, the data shouldn't change with the filesystem format, but maybe I'm wrong.
Your thought is correct, so something other than the file system is forcing the full resync.
My best guess is that you have accidentally changed the order of the data disks in the config, and/or that the relative paths (from the snapraid data disk's perspective) have changed.
Given that I haven't yet found any graphical Linux file manager that preserves sub-second timestamps (I looked at both X and ncurses ones, although I didn't follow up much on the X ones), I think there's a high chance the metadata (timestamps) wasn't transferred properly, so snapraid thinks those are different files.
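For what it's worth, a plain command-line copy does keep full-precision timestamps on Linux, so it can serve as a workaround when a file manager doesn't. A minimal sketch, with placeholder mount points:

```shell
# GNU cp in archive mode (-a) preserves ownership, permissions and
# nanosecond timestamps; /mnt/old and /mnt/new are placeholder paths.
cp -a /mnt/old/. /mnt/new/

# Compare the full-precision mtime of a sample file on both sides:
stat --format='%y' /mnt/old/somefile
stat --format='%y' /mnt/new/somefile
```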
I'm guessing you used btrfs-convert (part of btrfs-tools) to do the conversion. Is that correct?
I've never used it, but the question is whether it preserves the nanosecond timestamps. I guess not. But it would be interesting to know for sure. You can test it with the stat program as Andrea suggested.
Hi Dexter,
Ensure that the directory structure inside the mount point is exactly as before.
Another possibility is that you used a copy program that is not able to correctly copy the timestamps.
You can check that with the "stat" command:
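For example (the path is a placeholder for a file on one of your data disks):

```shell
# Print the full metadata, including the nanosecond-precision
# Access/Modify/Change timestamps:
stat /mnt/disk1/somefile
```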
The Modify: line must match exactly in both ext4 and btrfs filesystems for the same file.
Ciao,
Andrea
Yes, I used btrfs-convert to convert the disk. I guess that must be what it is.
I just checked and it looks like that's what the problem is:
  Size: 129011      Blocks: 256        IO Block: 4096   regular file
Device: 38h/56d     Inode: 24510731    Links: 1
Access: (0775/-rwxrwxr-x)  Uid: ( 1000/ kane)   Gid: ( 1003/ pool-rw)
Access: 2016-04-27 22:11:51.000000000 +0800
Modify: 2015-06-14 08:31:27.000000000 +0800
Change: 2016-08-26 17:11:42.000000000 +0800
 Birth: -
I'm guessing there's not a way to tell snapraid to ignore that?