
Adding a parity disk question

  • fred jones

    fred jones - 2023-04-30

    Hi All,
    I added a parity disk and ran sync with the "-F, --force-full" option. It said the time to finish was about 50 hours. I stopped it around 10% to update some software, which required a reboot. I later ran sync without the -F option and it said there was nothing to do. I checked and, sure enough, the new parity disk had a parity file that was the same size as the other parity disks. How is that possible? It was only 10% done.

    Do you have to let sync -F finish without interruptions? That's a long time to go without something forcing a restart of the sync. I decided to run sync -F again, and it says it will take about 50 hours. I will let it run longer this time.

    Any ideas? From what I am seeing, you cannot interrupt a sync -F, is that true? But... sync status says no sync is in progress and 93% of the array is not scrubbed.

    Thanks!

    • David

      David - 2023-05-01

      The sync "creates" the parity files at full size up front, but it only fills them in as the sync runs, so you need to let it run all the way through. Uncomment autosave in your config and set it so the sync doesn't have to complete in a single pass. Mine is set to 20000. Once you see it save a checkpoint, you can break the sync and it will pick up from that point later.
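
      For reference, the relevant snapraid.conf line looks something like this (20000 is just the value I happen to use; pick whatever checkpoint interval suits your setup):

          # Automatically save the state when syncing after the specified
          # amount of GB processed (uncomment to enable).
          autosave 20000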

      • fred jones

        fred jones - 2023-05-01

        Thanks! The "autosave" was my issue! I had it set way too high! 50000! That's 50 TB, correct?

        From the example configuration file -

        Automatically save the state when syncing after the specified amount of GB processed (uncomment to enable).

        Exactly what does that mean? Is that the number of bytes read from a data disk that have had their parity calculated and written to a parity disk?

        Example - say I have two drives, each 50000 GB in size, and autosave is set to 50000. Does that mean that SnapRAID would process "half" the array, autosave, and then finish processing the rest?

        My preference would be for autosave to take a time value rather than a size value. I would like to set it to 6 hours and have it save at that interval. I was using the speed that SnapRAID reported to calculate the autosave number, but that speed is highly variable. Perhaps we can add that to a SnapRAID wish list. I bet there's a post somewhere that people can add to.
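
        In the meantime, a rough conversion from a target interval to an autosave size is just speed times time. Here is a quick sketch (Python, purely illustrative; the 859 MB/s rate is taken from the sync output I post below, and it varies a lot, which is exactly the problem):

            # Rough autosave value (in GB, as snapraid.conf expects) from a
            # target checkpoint interval and an observed sync speed in MB/s.
            def autosave_gb(speed_mb_per_s: float, hours: float) -> int:
                seconds = hours * 3600
                return int(speed_mb_per_s * seconds / 1000)  # MB -> GB

            print(autosave_gb(859, 6))  # ~18554 GB for a 6-hour checkpoint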

  • fred jones

    fred jones - 2023-05-01

    Well, this bites! sync -F failed at 27%, my fault though. But again snapraid status says all is good. Do I need to restart the sync -F and let it finish without stopping? It's just hard to keep this system up for 50+ hours straight!

    Here is the printout showing that status says everything is OK even though sync -F stopped at 27%. I think I am just wearing out the array! Sorry about the non-mono font making things look messy! I don't see a way to set the font! Any advice on whether I have to run this non-stop is appreciated!

    Using 216 MiB of memory for 32 cached blocks.
    Selecting...
    Syncing...
    27%, 42223795 MB, 859 MB/s, 182 stripe/s, CPU 12%, 33:25 ETA
    Autosaving...
    Saving state to C:/snapraid116/snapraid.content...
    Saving state to G:/snapraid.content...
    Saving state to Z:/snapraid.content...
    Saving state to V:/snapraid.content...
    Saving state to Z:/Mount_Drives/HGST_New/snapraid.content...
    Error writing the content file 'Z:/snapraid.content.tmp'. No space left on device [28/112].
    
    C:\snapraid116>snapraid status
    Self test...
    Loading state from C:/snapraid116/snapraid.content...
    Using 8887 MiB of memory for the file-system.
    SnapRAID status report:
    
       Files Fragmented Excess  Wasted  Used    Free  Use Name
                Files  Fragments  GB      GB      GB
       70913      78     583       -    5829     167  97% 6TB_1
       38884      24     297       -    5708     262  95% 6TB_2
       24279      20    2884       -    7338     136  98% 8TB_1
       40228      13     517    40.6    7895      65  99% 8TB_4
       15403      34     331    33.8    7736     231  97% 8TB_5
       47685      10      92    42.5    7878      79  99% 8TB_6
       18085      22     199       -    7519      87  98% 8TB_7
       11113      78     171    33.3    7756     211  97% 8TB_8
       63245      96     361    43.0    7756     200  97% 8TB_11
       11442      12      49    38.3    7757     209  97% 8TB_12
         109       0       0       -     743     256  74% 1TB_1
        1468       6       6       -     920      79  92% 1TB_3
        5561      16      50     0.6    7696     271  96% 8TB_8_bad
           0       0       0       -       0       -   -  2TB_1
        1769       4      11       -    1838      96  95% 2TB_2
        5330       5      99    24.0    7874      94  98% 8TB_13
       11991       3      14    33.7    7669     297  96% 8TB_14
       39347       8      79    -4.9    7814     143  98% 8TB_15_MDD1
        6778       8      86    32.8    7694     273  96% 8TB_16_MDD2
        6918       5      12   -30.9    7438     498  93% 8TB_17_HGST
        8122       3       4    32.3    7894      73  99% 8TB_19
        7389      12      23   -40.0    7414     514  93% disk25
     --------------------------------------------------------------------------
      436059     457    5868   355.0  138176    4252  97%
    
    
     48%|                                                                 o
        |                                                                 o
        |                                                                 o
        |                                                                 o
        |                                                                 o
        |                                                                 o
        |                                                                 o
     24%|                                                                 o
        |                                                                 o
        |                                                                 o
        |                                                                 o   o
        |  o                                                              o   o
        |  o    o    o                                                    o   o
        |  o    o    o   *                                                o   o
      0%|o_o*___o____oo***________________________________________________o__**
        37                    days ago of the last scrub/sync                 1
    
    The oldest block was scrubbed 37 days ago, the median 3, the newest 1.
    
    No sync is in progress.
    The 93% of the array is not scrubbed.
    No file has a zero sub-second timestamp.
    No rehash is in progress or needed.
    No error detected.
    
    C:\snapraid116>
    
  • fred jones

    fred jones - 2023-05-01

    Interesting! The font looks much better after posting!

