SnapRAID v12.0 has been released at:
SnapRAID is a backup program for a disk array.
SnapRAID stores parity information in the disk array,
and it allows recovering from up to six disk failures.
This is the list of changes:
* Parallel disk scanning. It's always enabled, but it doesn't cover the -m option,
which still processes disks sequentially.
Well, I saw that "Parallel disk scanning" and hopped right to it and broke it.
However, in your defense, I also just swapped out the PSU today and my mainboard can be a little touchy, so I will run some memory checks later on.
Last edit: Master CATZ 2021-12-08
Hmm, I downloaded snapraid-12.0.tar.gz instead of using git and it worked.
When building from git, it's necessary to delete the old .o files and reconfigure the source. Something like:
make distclean
sh autogen.sh
./configure
make
Thanks heaps for this update .. worked a treat. It only took 116 sec to scan all disks, vs ~2 min per drive before.
Did syncing always have stripe/s? Is that a new feature?
I also noticed less CPU is being used while syncing; it seems to only be using 1 core out of the 24 for the syncing?
And it seems to have separate threads for each drive writing to the log file?
My overall bandwidth while syncing seems to be a lot slower than it used to be.
I am unsure if it's being limited by the write speed of the parity disks, as for some reason I am only getting 20 MB/s write speed to each of the 4 parity disks; this used to be over 80 MB/s per disk (128 MB/s average). During the sync I see 2~3 GB/s total read and 300~500 MB/s total write
across the 24 drives I have in each 4RU rack.
Running bonnie benchmarks I am still able to reach 128 MB/s average R/W per drive, so it's not from me failing to reconnect the power to the SAS RAID card properly, and multi-path is definitely still working on the 12 Gb/s RAID card.
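To double-check whether the parity disks themselves are the bottleneck, a quick per-disk write test can be run outside SnapRAID. A rough sketch, assuming hypothetical mount points /mnt/parity1..4 (substitute your real parity mounts):

```shell
# Hypothetical mount points -- substitute your real parity disk mounts.
for p in /mnt/parity1 /mnt/parity2 /mnt/parity3 /mnt/parity4; do
    echo "== $p =="
    # conv=fsync makes dd flush to disk before reporting, so the speed
    # on the last line reflects the disk rather than the page cache
    dd if=/dev/zero of="$p/ddtest.bin" bs=1M count=256 conv=fsync 2>&1 | tail -n 1
    rm -f "$p/ddtest.bin"
done
```

If each parity disk still shows its usual sequential speed here, the slowdown is more likely in how the sync interleaves the parity writes than in the disks or the controller.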
Last edit: Master CATZ 2021-12-08
Ran into trouble during the 'make' step (see below). I followed the directions from the install readme: extract the archive, then run './configure' and 'make'.
Note: the system is running Debian Bullseye.
cmdline/unix.c: In function ‘filephy’:
cmdline/unix.c:579:40: warning: array subscript 0 is outside the bounds of an interior zero-length array ‘struct fiemap_extent[0]’ [-Wzero-length-bounds]
579 | uint32_t flags = fm.fiemap.fm_extents[0].fe_flags;
| ~~~~~~~~~~~~~~~~~~~~^~~
In file included from cmdline/portable.h:129,
from cmdline/unix.c:18:
/usr/include/linux/fiemap.h:37:23: note: while referencing ‘fm_extents’
37 | struct fiemap_extent fm_extents[0]; /* array of mapped extents (out) */
| ^~~~~~~~~~
cmdline/unix.c:580:41: warning: array subscript 0 is outside the bounds of an interior zero-length array ‘struct fiemap_extent[0]’ [-Wzero-length-bounds]
580 | uint64_t offset = fm.fiemap.fm_extents[0].fe_physical;
| ~~~~~~~~~~~~~~~~~~~~^~~
In file included from cmdline/portable.h:129,
from cmdline/unix.c:18:
/usr/include/linux/fiemap.h:37:23: note: while referencing ‘fm_extents’
37 | struct fiemap_extent fm_extents[0]; /* array of mapped extents (out) */
| ^~~~~~~~~~
Last edit: Charles Grisamore 2021-12-08
It's gcc 10 that is more pedantic, and it doesn't like the Linux kernel headers. You can just ignore the warnings.
I'll disable the warning in the next release.
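In the meantime, if the warning is noisy, it can be silenced at configure time by turning off just that diagnostic. A sketch, assuming the usual autotools build from the tarball:

```shell
# -Wno-zero-length-bounds disables only the gcc 10 zero-length-array
# diagnostic; CFLAGS passed to ./configure apply to the whole build
./configure CFLAGS="-O2 -Wno-zero-length-bounds"
make
```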
Sorry for the false alarm; I thought the warning messages were more serious and didn't try the next step in the build process. Everything tested properly and is working fine.
"fix" does not seem to run in parallel
also I am trying to restore some single disks but its just wasting time on files that already exist
btrfs fs damaged beyond repair so imaged disks reformatted then used used btrfs restore to do what files I could and was trying to run "fix" over the top from a sync done previous week to help catch any missing file (downside being any updated files from the last week would be old ) but its just wasting days on files that exist
Disk 10 is pulled because "fix" kept hanging from I/O errors caused from a faulty btrfs
currently putting files back onto a fresh Disk 10 and will try again and get both Disk 2 and 10 done at same time
also how can I get "fix" to keep the original time stamps and permissions
if I do not run as root most files are not retrieved and if I run as root they all end up with root being the owner
"fix" does not seem to resume , or so I have to give it the same log file ?