No, since the bad block marking is invisible at the interface to the host.
Use the -p option of scrub; it takes either a percentage or the string "new".
Try ionice, not nice, since I/O is the problem.
PAM can limit the size of a single file (and many more things). "ulimit -a" will show some limits of your process.
I can tell you about the 8.7TB (or 8.7*10^12): that is 8 * 1024 * 1024 * 1024 * 1024 = 8796093022208 bytes or 8TiB (or the signed 32-bit limit times the 4KB sector size). That is exactly the size of your first parity file. One small letter can really make a difference. Good thing that it is working. I had the idea because if you use split parity and later on get a large disc, you can copy both split parity files to the big disc and it works.
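To make the arithmetic concrete, here is a small Python check of the numbers above (just an illustration, not SnapRAID code):

```python
# The signed 32-bit block index limit times the 4 KB sector size
# gives exactly the observed parity file size.
max_blocks = 2 ** 31       # signed 32-bit limit
sector = 4 * 1024          # 4 KB sector size
size = max_blocks * sector

assert size == 8 * 1024 ** 4   # 8 TiB
print(size)                    # 8796093022208 bytes, i.e. ~8.8 * 10^12
```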
Maybe you can use split parity with two parity files on the same disc. I never tried it, but it may work.
Maybe you can ignore them in the array. SnapRAID has an option in its configuration file to ignore directories or files (a dedicated directory for constantly changing files would probably be best).
Maybe when you synced the 60TB array, Plex, VNC and the other programs were not running? These programs need RAM too.
Additional RAM will definitely help. Other options would be to increase swap space (big slowdown) or to shut down other programs (like Plex, VNC, ...) to lower the RAM usage of the system. How many data and parity disks do you have? It seems to be a quite large SnapRAID array.
Have you checked syslog or messages in /var/log? How much memory do you have, and how much was available when the job was killed? Do you have core dumps enabled, and is there a core dump? What else is running and using memory?
You should be able to use the disk with the same SnapRAID configuration on the new computer without problems. If you had a very old SnapRAID version, you may have to use an intermediate version for content file conversion.
No, SnapRAID works on files (on the data disks/partitions) and stores parity in a single file (on the parity disk/partition).
Yes, make the parity partition a little bit larger due to waste (256KB block size) and create a second, unprotected partition on the parity drive.
SnapRAID cannot, but some other software can (like PrimoCache). SnapRAID is just an offline RAID with manual synchronization.
As you know, SnapRAID works on blocks. Each file is divided into blocks of 256KB, which are then used to calculate parity. If there are many small files, there may be a lot of waste; just think of 1KB files which are each expanded to one full block. In your case, about 73GB is wasted on d1 and almost 240GB on d2. That is why the parity file is much larger than the used space on each disk.
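As a rough sketch of that waste calculation (assuming the 256KB block size mentioned above):

```python
BLOCK = 256 * 1024  # SnapRAID parity block size in bytes

def parity_footprint(file_size: int) -> int:
    """Space a file occupies in the parity, rounded up to whole blocks."""
    blocks = -(-file_size // BLOCK)  # ceiling division
    return blocks * BLOCK

# A 1 KB file still occupies one full 256 KB block in the parity:
waste = parity_footprint(1024) - 1024
print(waste)  # 261120 bytes wasted for this single small file
```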
what does "snapraid status" say?
Yes, because all data drives are used to create the parity data; removing a data drive modifies all parity information. For adding a drive, it depends on how much data is already on that drive. In case you later add the RMA'ed drive to the array again, you can restore the data for that drive from the parity data, but of course you are not allowed to do any sync in between. There is more information about that in the FAQ section.
Fast parity drives will not speed up the recovery process, since all data disks are also involved (to calculate parities for correction/recovery). So the speed of recovery is defined by the slowest drive in the array. LTO drives are backup devices; SnapRAID needs disk devices for parity.
My second parity is also split parity; the config line is: 2-parity /srv/dev-disk-by-label-Parity2a/snapraid.2-parity,/srv/dev-disk-by-label-Parity2b/snapraid.2-parity If you have three or four drives, just add all disks separated by commas. If it's working correctly, "snapraid status" will show almost zero values under "Wasted" as usual.
It is possible to use 4 * 3TB (or 3 * 4TB) drives as split parity (allowing parity for 12TB). You will need a recent version of SnapRAID (the feature was implemented in v11).
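For the 4 * 3TB case, the parity line in snapraid.conf could look like this (the disk labels here are made up for illustration; use your own mount points):

```
parity /srv/dev-disk-by-label-par-a/snapraid.parity,/srv/dev-disk-by-label-par-b/snapraid.parity,/srv/dev-disk-by-label-par-c/snapraid.parity,/srv/dev-disk-by-label-par-d/snapraid.parity
```

All files listed on one comma-separated line together form a single 12TB parity.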
How much main memory do you have in your computer? Swap is used if the computer runs out of main memory (which slows things down like hell). SnapRAID uses in your case about 5GB of main memory (for about 1.4TB of data); I see no problem here. Can you post a df output?
With OMV4/5 you change much more than only SnapRAID, e.g. the Linux kernel, all programs, the environment, ... Maybe one of these changes is the problem (most likely the kernel). Have you tested OMV3 with SnapRAID 11.3?
please check the FAQ. There is no UUID support for ZFS.
This could be the problem. Usually you need special options to preserve all file attributes. A good approach is to use rsync as the copy program.
Maybe the copy did not preserve the timestamps; that depends on the command line flags. Which command exactly did you use to copy?
Did you do some memory tests? SnapRAID puts quite some stress on memory and disks (DMA). And newer versions of the Linux kernel may improve performance (disk/memory), which could trigger the problem.
What does snapraid diff say now? Still some deleted files? Do you have the output of sync and scrub available? Maybe you are right that the srt file was deleted after the sync had already started...
snapraid scrub checks that all files are correct by computing a hash over each file. The files to check are read from the snapraid.content file. Since you removed the file, snapraid complains, because the checksum is wrong for sure. You will have to run snapraid sync first to solve this problem. It is no problem for added files, since they are not in the snapraid.content file.
A new disk means that all other disks must be read too, since parity is always computed over all disks. The data read speed depends only on the number of disks that have data and are used for the parity calculation. It seems that in your example not all disks were filled equally.
The disk rate depends mainly on how many disks are used for parity generation. Just think of a full and an almost empty disk: the sync starts with data from both disks and then continues with data only from the full disk (lower data rate). There is nothing wrong with this. The CPU usage for the parity calculation also depends on this.
If you add an additional disc, all parity must be recalculated for the size of the disk's data (since there is one more source; empty space does not count). Since you then have nine disks, it would take more than a week to sync everything (a full ninth disk). The good news: scrubbing will still only cover the delta.
Scrubbing is done to detect silent errors on protected data. Scrubbing may detect errors which can then be fixed with the help of the parity; just reading the data would silently return wrong data. See http://www.snapraid.it/faq#guidelines and http://www.snapraid.it/faq#checksuminfo
Since only the first sync takes so much time, any further sync should be fast (since only the delta is handled). The same is true for scrubbing (once everything has been scrubbed at least once, except the delta). You should scrub the array, maybe 5% every night? Then everything would be scrubbed in 3 weeks. How is your bay connected to the computer? I think this connection is making snapraid so slow. 6 days for 70TB means about 141MB/s, which is usually the speed of a single disk. Normally snapraid accesses...
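If you want to automate the nightly 5% scrub, a crontab entry could look like this (the schedule, path and log file are just an example, adjust to your system):

```
# /etc/cron.d/snapraid-scrub: scrub 5% of the array every night at 03:00
0 3 * * * root /usr/bin/snapraid scrub -p 5 >> /var/log/snapraid-scrub.log 2>&1
```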
That is why you should do a "scrub -p new" immediately after a sync (delta sync, same speed for both). And later you should do a "scrub -p <num>", which will scrub a percentage of the array. This should be done to detect silent bit errors from time to time. A "scrub -p 1" should run for about one and a half hours.
You can use "scrub -p 5" to scrub 5% of the data. If you do "scrub -p new", all data will be scrubbed, since you never did it before ('new' means scrub only the just added/modified data). The percentage scrubbing should be done from time to time.
After a successful "sync", you should make a check with "scrub -p new". This verifies that everything was synced correctly. SnapRAID should recognize if some files were moved. Sometimes it is required to run the "touch" command in snapraid (to set the sub-second timestamp).
One more question: was one of the data disks almost full at any earlier time? The parity file never shrinks, it only grows. This is no problem, because SnapRAID knows which parts of the parity file are used for which data files, so it will only grow if the size of the data files is larger than the current size of the parity file.
The difference in file systems should be no problem. The wasted space on the data disk is normal (it can be quite high if you have a lot of very small files). What kind of files do you have stored on the data disk? Are there any sparse files? You can check this with "du -s --apparent-size" vs. "du -s": for normal files the first is smaller than or equal to the second, while for sparse files it is larger. Can you check the following command on all disks: tune2fs -l /dev/sd?? | grep 'Reserved block count' This will show the space reserved for the root user only. Maybe there...
Hi, check if there are any other files on the parity disk; the parity disk is not shown in status. For the data disks, check if some files are excluded from SnapRAID and therefore not counted. Depending on the file system, part of the disk may be reserved for root only; df shows the space available to the user.
No, your setup will not work. Data and parity are not allowed to be on the same physical disk. If this single disc dies, you lose two things at once (one part of the data and one of the parity) => if two discs die, you may not be able to recover, since up to 3 or 4 parts of the parity are missing.
For better performance, do not use USB at all. I think 220MByte/s over all disks is already at the limit (there is overhead in USB, switching between USB devices, ...).
You have to do so if you added files to the SnapRAID array and snapraid sync tells you to do so.
You should run the "snapraid touch" command. It will set the sub-second timestamps and allow snapraid to check for moved files more exactly.
To do: after adding the new disc in the GUI, just check that the 2-parity in the snapraid configuration still has both discs. Probably the second one will be removed by the GUI; just add it again.
Let the discs turn off after some time. In normal operation the parity discs are not needed, and only the data discs which are currently in use need to be on. That's one of the advantages of snapraid: not all discs need to be up. For mergerfs I would keep a little bit more than 4G of free space. You can merge all data discs into one union file system. If you do some operations in the GUI, please check snapraid.conf afterwards; maybe the second disc of the second parity is missing.
Looks fine now. You should disable Nextcloud during snapraid sync; it looks like someone updated a few files in Nextcloud during the first sync operation. I am glad that it now works for you.
Yes, you have to do a snapraid -F sync, see also the FAQ. I did not remember that I used a force option when I added a second parity to my array; I am getting old these days...
You should be able to continue with snapraid sync.
what was your last command and what was the output?
You understood me wrong: just enable the second parity and then continue with sync. Do not go back to single parity.
I realized that in a previous screenshot the array was already fully synced. It seems that you were already partly syncing with two parity discs. Then the next step would be to enable the second parity again and just do a snapraid sync (since it was not finished). Maybe that caused the strange status output.
Either this, or do a snapraid sync and then snapraid scrub -p new. Your command will first create the hashes for all files in advance and then sync the array; the latter will sync the array (hashes are calculated in parallel with the parity) and then do a second run to check that all files are fine (parity and hash).
Your changes look right. Please include them as code next time (some characters are handled specially). The best next try would be to extend snapraid from a clean state: comment out the second parity using a hash sign and check whether snapraid status looks fine again.
Can you send me your snapraid.conf, please?
From the output it looks like one parity is only 2.5TB; in the old output (before the changes) all were 8TB. These are two shell commands (there is a space between df and -h); please send their output.
Was snapraid synced before you added the new parity? You should modify snapraid only if it is fully synced. What do mount and df -h say? What does it say if you comment out the second parity temporarily?
Yes, this should do it. Before syncing, you could try snapraid status once more to check whether the 2nd parity has its full size.
let me know if it works like expected.
In the GUI you can only add a single parity disc, not split parity. So you add one 4TB parity disc and save in the GUI. Then you edit the snapraid configuration file /etc/snapraid.conf and add the second 4TB parity disc, so that you have 8TB in total for the second parity. Nothing more needs to be done.
You have /srv/dev-disk-by-label-disque5 twice (used in both parities). Otherwise it looks fine.
That will not work with the union file system. Snapraid will generate one parity file, and a union file system is not able to split a file over two discs. OMV has no GUI support for split parity; you will have to modify /etc/snapraid.conf manually (look at my example). Just add one 4TB disc as parity and then modify the configuration file by adding the second disc to create a split parity. This will never show up in the GUI, because OMV never reads the configuration from snapraid; it uses its own XML file...
It's all just in the snapraid configuration file. Here is part of mine as an example (I have a similar configuration):
parity /srv/dev-disk-by-label-Parity1/snapraid.parity
2-parity /srv/dev-disk-by-label-Parity2a/snapraid.2-parity,/srv/dev-disk-by-label-Parity2b/snapraid.2-parity
That's all. Split parity would allow even more discs for one parity...
Hi, if you use 3*4TB discs, you can use 2 of them to make one 8TB parity using the split parity functionality. This has the advantage that fewer discs work in parallel for the parity calculation. You would have for data: 5*8TB, 1*6TB, 1*4TB => 7 data discs; for parity: 1*8TB, 2*4TB (split parity), a total of two parities. If a 4TB data disc dies, you can recover onto a larger disc, since snapraid does not depend on physical disc information, just on file system storage. If a 4TB parity disc would die, you can copy...
Your problem has nothing to do with SnapRAID; it's an issue with mergerfs (or its use). Which commands are failing? Which error code is returned? What is the output of df?
So you are using mergerfs as the union file system. SnapRAID has nothing to do with how the discs are filled up, since it only generates parity files over all data discs. To get information about mergerfs, please install xattr with the following command line: apt-get install xattr Then list the content of the pseudo configuration file of mergerfs: xattr -l <mountpoint>/.mergerfs Please post the output of this command.
Probably the wrong forum. Which OMV version and which union file system?
Hi, today I had some problem with my SnapRAID configuration. I have 4 data disks and 2 parities (the second one is split). When I started snapraid diff, I got the following error:
Loading state from /srv/dev-disk-by-label-ssd/snapraid.content...
Decoding error in '/srv/dev-disk-by-label-ssd/snapraid.content' at offset 540
The file CRC is correct!
Parity '2-parity' misses used file '1'!
If you have removed it from the configuration file, please restore it
I found out that in the configuration file the...
You are right, that would be one possibility. But there is also another way to go: combine the two 2TB disks as split parity (2+2=4TB). Then all data would be on the three 4TB disks. This would result in one less data disk to protect (3*4TB vs. 2*4TB+2*2TB).
The parity disk must be as large as the largest data disk. Since the smallest parity disk is 2TB, the maximum data that can be protected is also 2TB; that's why you have 2TB of waste (this part of the data cannot be protected).
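In other words (a tiny sketch, sizes in TB, numbers taken from the situation above):

```python
def unprotected(data_disk_tb, smallest_parity_tb):
    """Data on a disk beyond the smallest parity size cannot be protected."""
    return max(0, data_disk_tb - smallest_parity_tb)

# A 4 TB data disk with a 2 TB smallest parity leaves 2 TB unprotected:
print(unprotected(4, 2))  # 2
```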
All parity disks must be as large as the largest data drive, so you will have to use two 10TB disks as parity. One parity is assigned to exactly one disk. With 8 data disks you should use 2 parity discs. This means that your setup is probably 5*8TB + 3*10TB data and 2*10TB parity.
Disc identifiers should/must be unique; they are used by many programs to identify the disc. Problems can arise from identical identifiers...
You should run the "snapraid touch" command so that all timestamps get a non-zero sub-second part; SnapRAID uses timestamps to identify modified files. If you have some time, execute "snapraid scrub -p new" to check all files for consistency; only the first run will take as long as the first sync. From time to time you should run "snapraid scrub -p 5" (scrub five percent of the storage) to check file consistency. Everything else looks fine.
Usually the same size for the parity disc is fine; only with a very large number of small files may you run into the problem that you cannot fill up the data discs completely. If you can remove one disc from a mirrored ZFS pool, then you are fine: you can then copy all data from the single ZFS mirror disc to the newly formatted (EXT4, XFS) data disc. For the parity disc, check the FAQ on how to create fewer inodes (to save some space).
Yes, it can. This feature is called split parity. Each parity in total must be equal to or bigger than the largest data drive.
You could still do it with split parity:
Scenario C (proposal)
D: 10TB data
P: 10TB data
E: 2TB + F: 3TB + G: 5TB parity
And that works already today (with much less complexity). The parity disk is accessed less than the data disks.
So when I look at this list here, /nas is not mounted, and therefore everything is on the root partition...
BTRFS is not really supported by SnapRAID. Above all, it brings no advantage for a parity disk; simply use EXT4 or XFS for it.
It just depends on the number 'n' of parity discs, since you can recover from 'n' failing discs (either parity or data) without data loss. This means that you will need at least 'n+1' copies of the content file on different discs (since 'n' copies may have been lost due to the failing discs). Because there is some space overhead on the parity discs, it is advised to use only data discs for the content files.
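For example, with two parity discs (n=2) you would list at least three content files on different data discs in snapraid.conf (the labels here are illustrative only):

```
content /srv/dev-disk-by-label-d1/snapraid.content
content /srv/dev-disk-by-label-d2/snapraid.content
content /srv/dev-disk-by-label-d3/snapraid.content
```

SnapRAID keeps all listed copies in sync, so at least one survives even if two discs fail.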
It seems that snapraid was interrupted by a signal.
snapraid.content and snapraid.content.tmp should be excluded from SnapRAID in the snapraid.conf configuration. These files are internal to SnapRAID (and usually excluded by default). Could you post your snapraid.conf file?
The problem is that in case you have one failing disc, every modified file counts as an additional change. In that case you would need two parity discs to recover, because there are two changes (the failed disc and the modified file); with single parity, you have lost data on the failed disc. SnapRAID is best for static data (since data is only fully protected at the time of sync). Adding new files is no problem (they are just unprotected), but modified files are dangerous. But it would be possible to revert to the last...
If you don't want to change your config, you can pipe the output of "snapraid diff" to "grep -v 'ignore pattern'". This will definitely be slower than an additional config file, since snapraid has to find all the different files first.
With a data drive larger than the parity drive, you will only get into problems if the data drive is filled with more data than the parity drive can handle; if it is less, everything will be fine. You may copy two 3TB data disks to the new 6TB disk and then use split parity with two 3TB discs for the single parity (the old parity disc and the freed-up data disc).
The FAQ says that for 21 data disks, 3 parity disks are just enough. In that case you can recover from three failing disks (after a successful sync). If there are changes, not everything may be recoverable.
I would say (from the documentation) that SnapRAID never deletes files if you add a disk; you will have to recalculate the parity. But you can try: add the disc and run a "snapraid diff", and it should list all files of the new data disc.
A 15+3 array is more secure than three 5+1 sets. But it will take a really, really long time to build, and hopefully you do not have USB problems (like a 256 byte mismatch of the same sector data). Do you really have 18 USB3 ports on your computer? Or do you use USB3 hubs (one more source of errors) and a big slowdown, as you mentioned a dramatic slowdown when all disks are read in parallel? I think that currently, if you add a data disk, all data and parity disks will be read (for checking and calculating the parity). Do...
Making the 2TB disk part of mergerfs is only problematic if important data ends up on it, since mergerfs can place new data anywhere (depending on the settings), so please carefully select the best mode for you. Another option would be to have only 8TB of data (on the 8TB drive) and use the 6TB+2TB disks as split parity. If you then get a new 8TB drive, it is just added (to have 16TB of protected data then).
You have to set up two SnapRAIDs with different configuration files; there is an option to specify a configuration file instead of the default one. For mergerfs you will just need two mount points in /etc/fstab. If you have one SnapRAID for all of your drives, it is very similar: since there is no connection between SnapRAID and mergerfs, just create two mount points again pointing to the right discs.
If you want to use multiple disks as one big one, look at mergerfs. It will create one mount point over the disks, and it has many options for where to store new files.
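A minimal mergerfs line in /etc/fstab could look like this (the mount points are just an example; check the mergerfs documentation for the policy options that fit your use case):

```
/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0  0
```

Here category.create=mfs places new files on the branch with the most free space.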