Hello. I had a drive go out on me, so I pulled it out of my system and put a new drive in. I edited my snapraid.conf file, removed the entry for the bad drive (disk 10), and put in the path for the new drive for disk 10. I kicked off "snapraid fix", and the process stated at the beginning "UUID change for disk "d10"...". It scanned disks d1 through d9, and then on d10 I got an error: "Error opening directory... no such file or directory [2/2]."
This is my first time attempting a disk replacement so I could very well be doing it wrong...!
Here is my config:
Windows 8.1 running Snapraid 7.1
24 3TB drives in total - 21 data and 3 parity - all drives are NTFS and are mounted in NTFS folders
All data drives are the full size of the physical drive minus 10240M (10G).
All parity drives are the full size of the physical drive
The drive that failed was "C:\-=SR=-\SRS3\DSKS\HDCN065\" and was replaced with "C:\-=SR=-\SRS3\DSKS\HDCN122\". I can see the new drive and can copy content to it (I copied files as a test and then removed them).
Should I be replacing drive "d10", or should I just remove it and add the new drive as "d22"?
Any help would be greatly appreciated.
Thanks.
If I am understanding you correctly, it sounds like your snapraid.conf modifications were incorrect. All you should do is replace the path of the removed/bad drive with the path to the new replacement drive; you shouldn't remove the entry for the bad drive entirely. If you did only update the path of d10 (it's unclear - you should always copy and paste your before/after config file here), then make sure the path you specified exists and that you have security permissions to access it.
Last edit: Quaraxkad 2015-06-18
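To make that concrete, here is a minimal Python sketch of the kind of pre-flight check suggested above. The path is the hypothetical replacement mount point from this thread, so substitute your own snapraid.conf entry:

```python
import os

def check_mount(path):
    """Report whether a SnapRAID mount point exists and is usable.

    Returns a dict of simple boolean checks; a failing write test is
    the closest analogue to the "Error opening directory" failure.
    """
    result = {
        "exists": os.path.isdir(path),
        "readable": os.access(path, os.R_OK),
        "writable": os.access(path, os.W_OK),
        "write_test": False,
    }
    probe = os.path.join(path, "snapraid-probe.tmp")
    try:
        # A write that round-trips proves permissions in practice.
        with open(probe, "w") as f:
            f.write("ok")
        os.remove(probe)
        result["write_test"] = True
    except OSError:
        pass
    return result

if __name__ == "__main__":
    # Hypothetical path from the thread; use your own disk entry.
    print(check_mount(r"C:\-=SR=-\SRS3\DSKS\HDCN122"))
```

If the write test fails here, "snapraid fix" will almost certainly fail on the same directory.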
Thanks for the reply. Hopefully this clears things up. Here is the Data Disks portion of my config file BEFORE any changes:
disk d1 C:\-=SR=-\SRS3\DSKS\HDCN026\
disk d2 C:\-=SR=-\SRS3\DSKS\HDCN027\
disk d3 C:\-=SR=-\SRS3\DSKS\HDCN028\
disk d4 C:\-=SR=-\SRS3\DSKS\HDCN030\
disk d5 C:\-=SR=-\SRS3\DSKS\HDCN059\
disk d6 C:\-=SR=-\SRS3\DSKS\HDCN060\
disk d7 C:\-=SR=-\SRS3\DSKS\HDCN061\
disk d8 C:\-=SR=-\SRS3\DSKS\HDCN062\
disk d9 C:\-=SR=-\SRS3\DSKS\HDCN063\
disk d10 C:\-=SR=-\SRS3\DSKS\HDCN065\
disk d11 C:\-=SR=-\SRS3\DSKS\HDCN094\
disk d12 C:\-=SR=-\SRS3\DSKS\HDCN095\
disk d13 C:\-=SR=-\SRS3\DSKS\HDCN101\
disk d14 C:\-=SR=-\SRS3\DSKS\HDCN102\
disk d15 C:\-=SR=-\SRS3\DSKS\HDCN103\
disk d16 C:\-=SR=-\SRS3\DSKS\HDCN104\
disk d17 C:\-=SR=-\SRS3\DSKS\HDCN105\
disk d18 C:\-=SR=-\SRS3\DSKS\HDCN106\
disk d19 C:\-=SR=-\SRS3\DSKS\HDCN107\
disk d20 C:\-=SR=-\SRS3\DSKS\HDCN108\
disk d21 C:\-=SR=-\SRS3\DSKS\HDCN120\
Here is the config file AFTER changing the "disk d10" entry to point to the NTFS folder for the new drive:
disk d1 C:\-=SR=-\SRS3\DSKS\HDCN026\
disk d2 C:\-=SR=-\SRS3\DSKS\HDCN027\
disk d3 C:\-=SR=-\SRS3\DSKS\HDCN028\
disk d4 C:\-=SR=-\SRS3\DSKS\HDCN030\
disk d5 C:\-=SR=-\SRS3\DSKS\HDCN059\
disk d6 C:\-=SR=-\SRS3\DSKS\HDCN060\
disk d7 C:\-=SR=-\SRS3\DSKS\HDCN061\
disk d8 C:\-=SR=-\SRS3\DSKS\HDCN062\
disk d9 C:\-=SR=-\SRS3\DSKS\HDCN063\
disk d10 C:\-=SR=-\SRS3\DSKS\HDCN122\
disk d11 C:\-=SR=-\SRS3\DSKS\HDCN094\
disk d12 C:\-=SR=-\SRS3\DSKS\HDCN095\
disk d13 C:\-=SR=-\SRS3\DSKS\HDCN101\
disk d14 C:\-=SR=-\SRS3\DSKS\HDCN102\
disk d15 C:\-=SR=-\SRS3\DSKS\HDCN103\
disk d16 C:\-=SR=-\SRS3\DSKS\HDCN104\
disk d17 C:\-=SR=-\SRS3\DSKS\HDCN105\
disk d18 C:\-=SR=-\SRS3\DSKS\HDCN106\
disk d19 C:\-=SR=-\SRS3\DSKS\HDCN107\
disk d20 C:\-=SR=-\SRS3\DSKS\HDCN108\
disk d21 C:\-=SR=-\SRS3\DSKS\HDCN120\
(note: the extra line breaks are not in the config file; they were added for formatting, since cutting and pasting was combining all the lines...)
The only difference is that I changed the "disk d10" line from:
disk d10 C:\-=SR=-\SRS3\DSKS\HDCN065\
to
disk d10 C:\-=SR=-\SRS3\DSKS\HDCN122\
My login to Windows is a domain account with Domain Admin rights. I opened the command prompt with "Run as Administrator" (this is what I have always done for sync, scrub, and fix operations).
I can see the drive in Windows Explorer (more precisely I can see the NTFS folder) and I have confirmed that I can write to the drive/folder.
Thanks for looking and happy to try whatever is needed.
Last edit: Vorpel 2015-06-18
Is the new disk empty? Check the NTFS security settings for the folder mount, the drive (accessible through diskmgmt), and any contents (if there are any). Does the SnapRAID process continue after the error or abort? Enable verbose logging with -v, and write a log file with -l something.log. Run the process again and you will get more information on where the error occurs.
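As a supplement to the -v/-l suggestion, a short Python sketch can walk the whole tree and list every entry the current account cannot open, which localizes permission problems without waiting for fix to abort. The path is the hypothetical one from this thread; adjust to your mount point:

```python
import os

def scan_for_access_errors(root):
    """Walk a directory tree and collect paths that raise access errors.

    SnapRAID aborts on the first directory it cannot open; walking the
    tree yourself shows every problem entry in one pass.
    """
    errors = []

    def on_error(err):
        # os.walk hands us the OSError for any unreadable directory.
        errors.append((err.filename, err.strerror))

    for _dirpath, _dirs, _files in os.walk(root, onerror=on_error):
        pass
    return errors

if __name__ == "__main__":
    # Hypothetical mount point; replace with your snapraid.conf entry.
    for path, reason in scan_for_access_errors(r"C:\-=SR=-\SRS3\DSKS\HDCN122"):
        print(path, "->", reason)
```

An empty result means every directory under the mount point opened cleanly; any line printed is a candidate for the "Error opening directory" failure.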
Thank you so much for the help.
The new disk (HDCN122) is empty. It has 266MB used and 2784.13GB free.
I checked the NTFS security settings and they exactly match disk d9 (HDCN063).
The snapraid fix process fails with the error: "Error opening directory 'C:/-=SR=-/SRS3/DSKS/HDCN122/', no such file or directory [2/2]."
I reran it with verbose output and logging on - once the process got to disk d10 it threw the same error. Here is the content of the log file:
version:7.1
unixtime:1434642857
time:2015-06-18 09:54:17
command:fix
argv:0:snapraid
argv:1:-v
argv:2:-l
argv:3:c:\apps\SR3_Fix_18Jun15.txt
argv:4:fix
selftest:
conf:snapraid.conf
blocksize:524288
disk:d1:C:/-=SR=-/SRS3/DSKS/HDCN026/
disk:d2:C:/-=SR=-/SRS3/DSKS/HDCN027/
disk:d3:C:/-=SR=-/SRS3/DSKS/HDCN028/
disk:d4:C:/-=SR=-/SRS3/DSKS/HDCN030/
disk:d5:C:/-=SR=-/SRS3/DSKS/HDCN059/
disk:d6:C:/-=SR=-/SRS3/DSKS/HDCN060/
disk:d7:C:/-=SR=-/SRS3/DSKS/HDCN061/
disk:d8:C:/-=SR=-/SRS3/DSKS/HDCN062/
disk:d9:C:/-=SR=-/SRS3/DSKS/HDCN063/
disk:d10:C:/-=SR=-/SRS3/DSKS/HDCN122/
disk:d11:C:/-=SR=-/SRS3/DSKS/HDCN094/
disk:d12:C:/-=SR=-/SRS3/DSKS/HDCN095/
disk:d13:C:/-=SR=-/SRS3/DSKS/HDCN101/
disk:d14:C:/-=SR=-/SRS3/DSKS/HDCN102/
disk:d15:C:/-=SR=-/SRS3/DSKS/HDCN103/
disk:d16:C:/-=SR=-/SRS3/DSKS/HDCN104/
disk:d17:C:/-=SR=-/SRS3/DSKS/HDCN105/
disk:d18:C:/-=SR=-/SRS3/DSKS/HDCN106/
disk:d19:C:/-=SR=-/SRS3/DSKS/HDCN107/
disk:d20:C:/-=SR=-/SRS3/DSKS/HDCN108/
disk:d21:C:/-=SR=-/SRS3/DSKS/HDCN120/
mode:par3
parity:C:/-=SR=-/SRS3/SRPD/HDCN025/snapraid.parity
2-parity:D:/-=SR=-/SRS3/SRPD/HDCN058/snapraid.2-parity
3-parity:E:/-=SR=-/SRS3/SRPD/HDCN119/snapraid.3-parity
content:C:/-=SR=-/SRS3/Config/snapraid.content
Is that the entire log file after running fix? It's incomplete...
Just in case I re-ran the command: "snapraid -v -l sr_fix_18Jun15_tr2.log fix"
Here is the full content of the file:
version:7.1
unixtime:1434655964
time:2015-06-18 13:32:44
command:fix
argv:0:snapraid
argv:1:-v
argv:2:-l
argv:3:sr_fix_18Jun15_tr2.log
argv:4:fix
selftest:
conf:snapraid.conf
blocksize:524288
disk:d1:C:/-=SR=-/SRS3/DSKS/HDCN026/
disk:d2:C:/-=SR=-/SRS3/DSKS/HDCN027/
disk:d3:C:/-=SR=-/SRS3/DSKS/HDCN028/
disk:d4:C:/-=SR=-/SRS3/DSKS/HDCN030/
disk:d5:C:/-=SR=-/SRS3/DSKS/HDCN059/
disk:d6:C:/-=SR=-/SRS3/DSKS/HDCN060/
disk:d7:C:/-=SR=-/SRS3/DSKS/HDCN061/
disk:d8:C:/-=SR=-/SRS3/DSKS/HDCN062/
disk:d9:C:/-=SR=-/SRS3/DSKS/HDCN063/
disk:d10:C:/-=SR=-/SRS3/DSKS/HDCN122/
disk:d11:C:/-=SR=-/SRS3/DSKS/HDCN094/
disk:d12:C:/-=SR=-/SRS3/DSKS/HDCN095/
disk:d13:C:/-=SR=-/SRS3/DSKS/HDCN101/
disk:d14:C:/-=SR=-/SRS3/DSKS/HDCN102/
disk:d15:C:/-=SR=-/SRS3/DSKS/HDCN103/
disk:d16:C:/-=SR=-/SRS3/DSKS/HDCN104/
disk:d17:C:/-=SR=-/SRS3/DSKS/HDCN105/
disk:d18:C:/-=SR=-/SRS3/DSKS/HDCN106/
disk:d19:C:/-=SR=-/SRS3/DSKS/HDCN107/
disk:d20:C:/-=SR=-/SRS3/DSKS/HDCN108/
disk:d21:C:/-=SR=-/SRS3/DSKS/HDCN120/
mode:par3
parity:C:/-=SR=-/SRS3/SRPD/HDCN025/snapraid.parity
2-parity:D:/-=SR=-/SRS3/SRPD/HDCN058/snapraid.2-parity
3-parity:E:/-=SR=-/SRS3/SRPD/HDCN119/snapraid.3-parity
content:C:/-=SR=-/SRS3/Config/snapraid.content
Oops - posted before I finished writing...
That is the full log from the command. I have 6 content files, so it sure seems like I'm not getting the full output. I would assume the output is buffered before being logged - could the failure just be aborting the command before the remainder of the buffer is written out?
I have tried rebooting the system - no change.
I have removed all partitions and re-setup - no change.
I have changed ownership to my logged in account (local admin, domain admin) - no change.
When I try to change user permissions, I get an error: "An error occurred while applying security information to: \\?\Volume{big long number}\. Failed to enumerate objects in the container. Access is denied."
I'm at a loss on what to do.
Question: Is it correct to replace the failed drive in the config file (drive d10 in this case) with the new disk?
Thanks.
Have you read the manual? http://snapraid.sourceforge.net/manual.html
Section "4.4 Recovering" should be applicable. Please read all the steps, as the first couple of steps can be repeated to reduce the number of non-recoverable files, but after that there is a point of no return. Also note the comment to stop any changes or activity on the disks until the recovery process is completed.
But before adding the new drive, you need to ensure it's working correctly, so first fix the error "An error occurred while applying security information to: \\?\Volume{big long number}\. Failed to enumerate objects in the container. Access is denied."
BUT, again, first ensure that no changes to the SnapRAID disks and files, and no other scheduled snapraid commands, are performed until the array is recovered.
/X
edit: For the Windows error, have you checked Google?
http://www.petenetlive.com/KB/Article/0000887.htm
http://www.richardawilson.com/2013/12/an-error-occurred-while-applying.html
Last edit: xad 2015-06-19
At this point I would try reformatting the new disk, assigning a drive letter to it, and changing the config to:
disk d10 T:\ (replace T with the new drive letter)
By doing that you can rule out any issue related to the mount point, and you can easily test that you have access to the disk from the command prompt before you run snapraid fix.
C:\Users\Vorpel>cd\
C:>cd Snapraid
C:\Snapraid>mkdir T:\test
C:\Snapraid>copy snapraid.conf T:\test\
C:\Snapraid>type T:\test\snapraid.conf
The above should result in a copy of your snapraid.conf file being placed in T:\test\ and the content of it printed on screen...
Thanks for the replies. I had tried reformatting the new drive, but that didn't make a difference. I followed the instructions here (http://www.petenetlive.com/KB/Article/0000887.htm) but still getting the same issue.
I did add a drive letter for d10 and that worked! I'm not sure what the issue is with the NTFS folder - I have 71 others that don't have any issues in Snapraid. I could not find any differences comparing the new one to several of the existing snapraid drives.
The fix is working and hopefully I will have a recovered drive later today.
I appreciate all the help.
Last edit: Vorpel 2015-06-19
So... In the process of disk d10 being fixed I lost another drive in the same Snapraid array - d13. I do have triple parity.
My question is: once I replace drive d13, will Snapraid fix both at the same time (another post indicated that, but they lost a data drive and a parity drive in a 2-parity setup), or do you recover/fix one drive, then repeat for the next drive?
If fix will do both drives, do I specify both with the -d option like this:
snapraid -d d10 -d d13 -l logfile.log fix
Or will "snapraid -l logfile.log fix" work?
I really didn't find much on multiple drive failures.
Thanks
Vorpel,
Without having had your issue or tested it, I would suggest the following:
test fix one drive at a time according to the manual, but stop before doing the sync (the point when you can't redo the recovery operation)
when both drives have no (or as few as possible) unrecoverable files, you can sync the "raid", which will put it into a "fully functional" state (but note this is a non-revocable command and your point of no return)
Comment:
If the drives are not completely dead, I think it was suggested that you should copy as many files as possible to the new drive before fixing it, as it will reduce the recovery time (no need to recover files with valid checksums). Unfortunately I do not have the reference, so you need to find the original source yourself.
"Lost another drive": do you have enterprise hard disks and/or "time-limited error recovery" activated? If running non-hardware RAID such as SnapRAID, it is much better to utilize the hard disk's internal block recovery function, as it will either correct bad blocks itself or return a corrupt block, which is then taken care of by SnapRAID's check/scrub and fix commands.
/X
xad - thanks for the quick reply. I will be picking up a replacement drive this morning and will start the recovery after that.
I'll post with my findings
Hi Vorpel,
When SnapRAID prints this message:
Error opening directory 'C:/-=SR=-/SRS3/DSKS/HDCN122/', no such file or directory [2/2]
does this directory exist?
The mount point directory should exist, and then SnapRAID can fill it with the recovered content. If not, just create it manually.
Ciao,
Andrea
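The "create it manually" step can be sketched as a small helper. This is a hedged Python example; the path in the comment is illustrative, not prescriptive:

```python
import os

def ensure_mount_point(path):
    """Create the mount-point directory if it is missing.

    SnapRAID expects the directory named in snapraid.conf to exist
    before fix can fill it with recovered content.
    """
    if not os.path.isdir(path):
        os.makedirs(path)  # creates intermediate directories too
        return "created"
    return "already present"

# Example (hypothetical path from the thread):
# ensure_mount_point(r"C:\-=SR=-\SRS3\DSKS\HDCN122")
```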
Andrea,
Yes, the directory did exist. I was able to copy a directory of files to it without issue (then I removed them). As I posted earlier, I was getting some odd behavior with trying to change permissions, so I don't know what was going on.
When I replaced the 2nd bad hard drive (d13), I set it up the way I have in the past - an NTFS folder at C:\-=SR=-\SRS3\DSKS\HDCN123\.
When I tried the fix, it gave the same error, "Error opening directory...". To expedite the fix, I also gave the drive a letter (K:).
The fix is complete - 4 hours 45 minutes. Wow - that was awesome!!!
I am running a check command now on d10 and will also run it on d13.
I would like to go back to using the NTFS folders instead of drive letters as I just don't have many drive letters available. Can I switch this back? I will give it a try once the check commands are done.
Thanks!
Regarding "If fix will do both drives, do I specify both with the -d option like this: snapraid -d d10 -d d13 -l logfile.log fix" - yes, that is correct.
As for fixing one drive at a time: I would not recommend running fix once for each disk, because it will take twice as long (at least). If you do only "fix -d d10", SnapRAID still has to calculate the additional missing blocks where d13 used to be; it just won't write that data back to the new drive. Then when you run "fix -d d13", it has to do all those calculations again. If you do both at once, it only does the work one time.
Last edit: Quaraxkad 2015-06-20
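The "do both at once" argument can be illustrated with a toy cost model. This is not SnapRAID's actual engine - just a back-of-the-envelope sketch assuming every surviving disk is read once per block row per fix run:

```python
def fix_cost(total_disks, failed, blocks, passes):
    """Toy model of the work fix does: for every block row, all
    surviving disks are read so parity can rebuild the missing data.

    passes is how many separate fix runs you perform; each run must
    re-read the surviving disks for every row, even when it only
    writes one of the failed disks back.
    """
    surviving = total_disks - len(failed)
    return passes * blocks * surviving

blocks = 1_000_000
# One combined run: snapraid -d d10 -d d13 fix
combined = fix_cost(24, ["d10", "d13"], blocks, passes=1)
# Two separate runs: fix -d d10, then fix -d d13
separate = fix_cost(24, ["d10", "d13"], blocks, passes=2)
print(separate / combined)  # two runs do twice the reading
```

Under this simplified model, the separate runs cost exactly twice as much disk reading as the combined run, which matches the reasoning above.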
Thanks for the clarification. The replacement drive is in and fix is running "snapraid -d d10 -d d13 -l log.log fix"
Hopefully no more drives will fail until the fix is completed...!
Update: The check command was successful for both drives (d10 and d13). There was one file on d10 that was damaged, but I didn't care about that file, so I ran a sync and it came back immediately that everything was OK. I then switched drives d10 and d13 back to using the NTFS folders instead of the drive letters and ran sync - no issues.
An added note: I use Stablebit Drivepool for disk pooling and I wasn't sure how the recovered drives would be handled. Thankfully Drivepool puts the pooled disk into read only mode when it detects a missing drive. I was able to remove the missing disk d13 from the pool (disk d10 was already gone though I didn't remove it). The two new replacement drives somehow ended up in a new Drivepool disk pool. In the process of removing the 2 drives from this new pool, drive d10's contents were wiped (by the tool). I am running a fix now and believe I will get everything back in order.
I now know that I should have checked the status of the two new replacement drives in Drivepool before doing the fix. This is all great learning as I'm sure I will be going through this many more times in the months/years to come.
A big thanks to xad, Leifi, Andrea and Quaraxkad for all your help!
I had this exact issue recently while trying to restore to a brand new, newly formatted, completely empty drive mounted in an NTFS folder. I'm not 100% sure if I had already upgraded from 7.1 to 8.1, but I believe so. This is on a Win 7 Pro 64-bit system. I was about to come here for help, but on a hunch I created a file (empty text file) on the new drive. Worked like a charm after that. So I guess it could be possible that the fix command has a problem seeing new empty drives mounted in a folder. I also use DrivePool, but I don't think that could be the issue since the new drive hadn't yet been added to the pool. But I'm not an expert.
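If that observation is right, the workaround is just to put one file on the empty mount before running fix. A hedged Python sketch (the path and file name are hypothetical):

```python
import os

def add_placeholder(mount, name="placeholder.txt"):
    """Drop an empty file onto a freshly formatted mount point.

    Workaround reported in this thread: fix failed on completely
    empty folder-mounted drives until at least one file existed.
    """
    path = os.path.join(mount, name)
    with open(path, "w"):
        pass  # create a zero-byte file
    return path

# Example (hypothetical path from the thread):
# add_placeholder(r"C:\-=SR=-\SRS3\DSKS\HDCN122")
```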
@Quigon
Thanks for the post - glad to know I'm not the only one that it happened to. After the fix/check/sync I was able to switch back to the NTFS folder for the drive and it's working.
I'm sure I'll be getting more practice at this soon!