Hello everyone
I understand the block size (256 KiB) can cause fragmentation, but I've got mostly large (>8 GB) files, and still my parity drive is now full.
I've got 4x5TB drives, one of them being parity and the other three being data.
The situation is as follows:
Data Drive 1: 1305GB Free
Data Drive 2: 187GB Free
Data Drive 3: 189GB Free
Parity Drive 1: 8GB Free
…and I had to delete some files from one of the data drives in order to complete a sync; otherwise I would have run out of parity.
Isn't this strange?
Is there a way to 'shrink' the parity?
Running SnapRAID 11.0 on Windows 7, btw.
Thanks!
It is normal for the parity file to remain larger after you have deleted files from the array, because you end up with unused parity blocks inside the parity file.
However, that is not a problem, since unused parity inside the parity file will be re-used when you add/sync new data. From a performance perspective it is completely negligible (we are talking a few seconds at most on operations that take several hours).
But if you absolutely want to get rid of the unused parity blocks inside the parity file, and thereby make it as small as possible, you can do it by running sync with -R, --force-realloc, like this: snapraid sync -R
You also need to be aware that if something goes wrong (like a disk failure) while this is running, you will have permanent data loss.
Thanks Leifi
Did that (it took 8 hours to complete), and the situation didn't change much.
Btw, I HAD to force realloc, otherwise a sync would not complete (unless, as I did, I removed some of the data from the data disks).
Have you checked that you don't have old stuff in the recycle bin on the parity disk?
How many files do you have on the disk with highest number of files on it?
(You can use snapraid status to find this information)
Last edit: Leifi Plomeros 2019-01-07
Hi
Recycle bin is set to 0 bytes on all drives
The disk with the highest number of files has 8751 files
8751 files should at most cause the parity file to be ~2 GB larger than the total amount of data on any data disk.
That leaves only two possible explanations that I can think of:
1. A corrupted file system on the parity disk, or an incorrect partition size (effectively missing ~140 GB). You should be able to confirm a situation like that by right-clicking the parity file, selecting Properties, and checking how big the parity file is in bytes. If the file is around 4,950,000,000,000 bytes, then this is NOT the problem. If it is closer to 4,800,000,000,000 bytes, then that is the problem, and you would need to repartition/reformat the drive to fix it.
2. One of the data disks has compression activated. Right-click the drive in My Computer and select Properties. If there is a check mark in the compression check box, then compression is activated, and you can simply disable it by unchecking the box and pressing OK.
The reason compression would cause the parity file to be larger than expected is that compression is completely transparent in Windows: when the file is read, it is presented to SnapRAID as uncompressed, so SnapRAID allocates parity for the file at its uncompressed size.
Last edit: Leifi Plomeros 2019-01-08
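To see where the ~2 GB figure comes from: SnapRAID allocates parity in whole blocks, so each file can leave its last parity block almost entirely unused. A quick back-of-the-envelope sketch of that worst case (the helper name here is mine, not part of SnapRAID):

```python
BLOCK = 256 * 1024  # SnapRAID's default block size: 256 KiB

def worst_case_overhead(num_files: int, block: int = BLOCK) -> int:
    """Upper bound on parity wasted by block rounding: each file's
    last parity block can be almost entirely unused."""
    return num_files * (block - 1)

# 8751 files at a 256 KiB block size: a bit under 2.3 GB of overhead.
print(worst_case_overhead(8751) / 1e9)
```

On average the waste is about half a block per file, so the real overhead is typically closer to half of this bound.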
Found the culprit :( (sorry, my fault for not noticing it before)
There was a hidden folder with ~700k files on one of the disks.
That was causing the massive overhead.
Would using a smaller (128 or 64 KiB) block size help? Any side effects?
Two times (128 KiB) or four times (64 KiB) the RAM requirement and content file size.
Do you really need that many tiny files in that folder? Can't you put most of them into ZIP or RAR archives?
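For scale, the same rounding arithmetic shows why ~700k tiny files hurt so much, and what a smaller block size would buy (again just a sketch; the function name is mine):

```python
KIB = 1024

def max_wasted_parity(num_files: int, block_kib: int) -> int:
    # Worst case: each file leaves its last parity block almost empty.
    return num_files * (block_kib * KIB - 1)

for block_kib in (256, 128, 64):
    gb = max_wasted_parity(700_000, block_kib) / 1e9
    print(f"{block_kib:>3} KiB blocks -> up to ~{gb:.0f} GB of rounding overhead")
```

Halving the block size halves the worst-case rounding overhead, but it also doubles the number of blocks SnapRAID must track, hence the doubled RAM use and content file size mentioned above.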