parity /snapraid/storage_p1/snapraid.parity
2-parity /snapraid/storage_p2/snapraid.2-parity
content /opt/.snapraid/snapraid.content
content /snapraid/storage_d1/.snapraid/snapraid.content
content /snapraid/storage_d2/.snapraid/snapraid.content
data storage_d1 /snapraid/storage_d1
data storage_d2 /snapraid/storage_d2
data storage_d3 /snapraid/storage_d3
data storage_d4 /snapraid/storage_d4
data storage_d5 /snapraid/storage_d5
data storage_d6 /snapraid/storage_d6
data storage_d7 /snapraid/storage_d7
data storage_d8 /snapraid/storage_d8
blocksize 64
hashsize 8
autosave 0
storage_d devices are XFS.
storage_p is ext4 parity.
Ubuntu 20.04.3 LTS
I reduced blocksize because the files are relatively small; a larger block size would waste storage later on. As a consequence I also had to reduce hashsize to keep RAM usage in check. If that is the cause of the problem, I can increase it since I have the resources, but the system has at least 10M files and is growing fast.
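As a rough sanity check on that blocksize/hashsize tradeoff, the per-block memory cost can be estimated as hashsize plus some fixed per-block overhead. The overhead figure and the 32 TiB array size below are assumptions for illustration, not SnapRAID's documented numbers:

```python
def estimate_ram_bytes(total_data_bytes, blocksize_kib, hashsize_bytes,
                       overhead_bytes=12):
    """Rough RAM estimate for the in-memory block table.

    overhead_bytes is an assumed per-block bookkeeping cost; the real
    figure depends on the SnapRAID version, so treat this as an
    order-of-magnitude sketch only.
    """
    blocks = total_data_bytes // (blocksize_kib * 1024)
    return blocks * (hashsize_bytes + overhead_bytes)

# Hypothetical 32 TiB of data with the config above: blocksize 64, hashsize 8
ram = estimate_ram_bytes(32 * 2**40, 64, 8)
print(f"{ram / 2**30:.1f} GiB")  # prints "10.0 GiB"
```

Halving blocksize doubles the block count, so the hashsize reduction only partly offsets it.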
When I hash the file, it has the right hash (I keep a DB with all that information).
I'm kinda puzzled where this error came from, since this has never happened before.
File has no hardlinks and no known problems.
Help? The file is ok, the disk's SMART is ok, the filesystem has no inconsistencies, and everything is at defaults except blocksize and hashsize.
I copied that file off the disk and added it to the exclusion list to let snapraid run, and it runs.
¯\_(ツ)_/¯ I don't get it, but at least I can sync until the problem is figured out.
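For the record, the exclusion is just an extra line in the config above; the path here is a placeholder for the actual file, anchored to the root of the data disk:

```
exclude /path/to/the/problem/file
```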
Ok, for whatever reason it had a problem with a file on storage_d5.
I did the same thing: copy the file, add it to the exclusion list, run snapraid sync, aaaaand some other file now has a problem on storage_d1. Which had not shown up minutes ago, when snapraid happily scanned d1-d4 before.
Ok... I had an application running that was hashing the files and moving them to another directory.
And that was the cause of this problem, probably due to the way XFS handles the metadata.
Still strange. But it seems to work now without problems, when the hashing application is not running.
Last edit: Suika 2021-10-23
Yes, I have found you need to be careful when running sync, as any files being added, changed, or moved during the sync will cause issues.