
ZFS conversion to SnapRAID

Forum: Help
Created by Rene Dis, 2019-09-04; last updated 2019-09-14
  • Rene Dis

    Rene Dis - 2019-09-04

Hi, I love the idea of SnapRAID and I'm planning on running it with OMV4.
But I'm really struggling to set up a new environment with SnapRAID and need some advice.

I currently have an OMV4 setup with:
ZFS pool1: 2x 8TB mirrored (8TB, 1.5TB free)
ZFS pool2: 2x 8TB mirrored (8TB, 1TB free)

What can I expect with SnapRAID?
SnapRAID pool: 4x 8TB (24TB data + 8TB parity)?

Is it possible to keep my current data with a trick, or do I need a temporary 16TB drive for the conversion?
Also, is 8TB of parity enough? I read that the parity disk needs to be the same size or bigger than a single data disk. But could the overhead become an issue as the disks fill up?

     
  • Walter Tuppa

    Walter Tuppa - 2019-09-06

Usually the same size for the parity disc is fine. Only if there is a very large number of small files may you run into the problem that you cannot fill up the data discs completely.

If you can remove one disc from a mirrored ZFS pool, then you are fine. You can then copy all data from the remaining single ZFS mirror disc to the newly formatted (ext4, XFS) data disc.
For the parity disc, check the FAQ on how to create fewer inodes (to save some space).
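The steps described above might look like this on the command line. This is only a hedged sketch: the pool, device, and label names are made-up examples (not from the thread), and OMV users would normally do the formatting and mounting through the web UI instead.

```shell
# Sketch only: pool, device, and label names are hypothetical examples.

# 1. Break one side of the ZFS mirror; the pool keeps running on the other disk.
zpool detach pool1 /dev/sdc

# 2. Reformat the freed disk as ext4 for SnapRAID data.
#    For the *parity* disk, the SnapRAID FAQ suggests creating fewer inodes
#    to save space, e.g.: mkfs.ext4 -m 0 -T largefile4 /dev/sdX
mkfs.ext4 -m 0 -L SDC /dev/sdc
mount /dev/disk/by-label/SDC /srv/dev-disk-by-label-SDC

# 3. Copy everything off the surviving half of the mirror.
rsync -aHAX --info=progress2 /pool1/ /srv/dev-disk-by-label-SDC/
```

Repeating this once per pool frees one 8TB disk from each mirror while the remaining single discs still hold the data, which is why no temporary 16TB drive is needed.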

     
  • Rene Dis

    Rene Dis - 2019-09-09

Thank you. I've detached one disk at a time from each zpool.

Created new data disks and one parity disk. I'm about to do the first sync with SnapRAID.
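For reference, once snapraid.conf lists the data and parity disks, the first sync is a single command. A minimal sketch (the optional diff step is just a dry-run listing):

```shell
# Preview what the sync would pick up, then build parity for all files.
snapraid diff    # optional: list files added/changed since the last sync
snapraid sync    # compute and write parity; the first run covers everything
```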

     
  • Rene Dis

    Rene Dis - 2019-09-13

So all data is copied, and the first sync took about 6 to 7 hours.

    Result:
    Loading state from /srv/dev-disk-by-label-SDB/snapraid.content...
Scanning disk SDB_-DATA...
Scanning disk SDC_-DATA...
Scanning disk SDD_-DATA...
    Using 886 MiB of memory for the file-system.
    Initializing...
    Resizing...
    Saving state to /srv/dev-disk-by-label-SDB/snapraid.content...
    Saving state to /srv/dev-disk-by-label-SDC/snapraid.content...
    Saving state to /srv/dev-disk-by-label-SDD/snapraid.content...
    Verifying /srv/dev-disk-by-label-SDB/snapraid.content...
    Verifying /srv/dev-disk-by-label-SDC/snapraid.content...
    Verifying /srv/dev-disk-by-label-SDD/snapraid.content...
    Syncing...
    Using 32 MiB of memory for 32 cached blocks.
96%, 1177 MB, 132 MB/s, CPU 34%, 0:00 ETA
100% completed, 1218 MB accessed in 0:00

SDB_-DATA  0% |
SDC_-DATA 13% | *
SDD_-DATA  0% |
   parity 55% | ****
     raid  7% |
     hash  5% | **
    sched 17% | **
     misc  0% |
              |__________
   wait time (total, less is better)

    Everything OK
    Saving state to /srv/dev-disk-by-label-SDB/snapraid.content...
    Saving state to /srv/dev-disk-by-label-SDC/snapraid.content...
    Saving state to /srv/dev-disk-by-label-SDD/snapraid.content...
    Verifying /srv/dev-disk-by-label-SDB/snapraid.content...
    Verifying /srv/dev-disk-by-label-SDC/snapraid.content...
    Verifying /srv/dev-disk-by-label-SDD/snapraid.content...

    And a "snapraid status" shows:
    Self test...
    Loading state from /srv/dev-disk-by-label-SDB/snapraid.content...
    Using 879 MiB of memory for the file-system.
    SnapRAID status report:

    Files Fragmented  Excess  Wasted   Used    Free  Use Name
            Files  Fragments    GB      GB      GB
    81917        0         0   39.7    3983    3989  50% SDB_-DATA
     9621        0         0   -0.1    4350    3639  54% SDC_-DATA
     4328        0         0   -0.2    4019    3971  50% SDD_-DATA
 --------------------------------------------------------------------------
    95866        0         0   39.7   12354   11600  51%

(scrub-age chart: all blocks were last scrubbed/synced 0 days ago)

    The oldest block was scrubbed 0 days ago, the median 0, the newest 0.

    No sync is in progress.
    The 100% of the array is not scrubbed.
    You have 2255 files with zero sub-second timestamp.
    Run the 'touch' command to set it to a not zero value.
    No rehash is in progress or needed.
    No error detected.

I'm a bit lost trying to understand this output and the tables. Can someone explain what's happening here, and whether everything is OK?

     
  • Walter Tuppa

    Walter Tuppa - 2019-09-14

You should run the "snapraid touch" command to give all timestamps a non-zero sub-second part. SnapRAID uses timestamps to identify modified files.

If you have some time, execute "snapraid scrub -p new" to check all newly synced files for consistency. Only the first run takes about as long as the initial sync.
From time to time you should run "snapraid scrub -p 5" (scrub five percent of the storage) to check file consistency.

Everything else looks fine.
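The advice above boils down to a short routine that many users schedule with cron. A hedged sketch (the schedule and the scrub thresholds are assumptions, not from this thread):

```shell
#!/bin/sh
# Hypothetical weekly SnapRAID maintenance routine.
snapraid touch            # give files a non-zero sub-second timestamp
snapraid sync             # update parity for new and changed files
snapraid scrub -p 5 -o 30 # verify 5% of blocks not scrubbed in 30+ days
```

Running touch before sync keeps the "files with zero sub-second timestamp" count from growing, and scrubbing a small percentage each week eventually covers the whole array.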

     

