NEWS: SnapRAID v11.0

  • Andrea Mazzoleni

    SnapRAID v11.0 has been released at:

    SnapRAID is a backup program for a disk array.

    SnapRAID stores parity information in the disk array,
    and it allows recovering from up to six disk failures.

    This is the list of changes:
    * Added support for splitting the parity into multiple partitions.
    You can now specify multiple files for a single parity. As soon as a
    file cannot grow anymore, the next one starts growing.
    In the configuration file, just put more files in the same 'parity'
    line, separated by ',' (comma).
    Note that if this feature is used, the saved content file won't be
    read by older SnapRAID versions.
    In Windows, 256 MB are left free on each disk to avoid the warning
    about full disks.
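    For example (the file paths here are hypothetical), a parity split
    across two partitions is configured as a single 'parity' line:

    ```
    parity /mnt/parity1/snapraid.parity,/mnt/parity2/snapraid.parity
    ```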
    * Added a new 'hashsize' configuration option. It can be useful on
    systems with low memory to reduce the memory usage.
    Note that if this feature is used, the saved content file won't be
    read by older SnapRAID versions.
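    For example (assuming the option takes the hash size in bytes, with
    16 as the default), the configuration file line to halve it would be:

    ```
    hashsize 8
    ```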
    * In Linux, added the missing support for Btrfs file systems. Note
    that for full support you also need the 'libblkid' library;
    otherwise you won't get the UUIDs.
    * Screen messages no longer print the disk directory in file paths.
    You can control the format with the test option:
    --test-fmt file|disk|path.
    * In Windows, allow using the escape character '^' to handle file
    patterns containing literal characters that match the globbing ones
    '*?[]'. In Unix it was already possible to do the same escaping
    with '\'.
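    For example (with a hypothetical file name), excluding a file whose
    name contains a literal '*':

    ```
    # Windows:
    exclude /backup^*.log
    # Unix:
    exclude /backup\*.log
    ```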
    * Added a new -R, --force-realloc option to reallocate all the
    parity information, keeping the precomputed hashes.
    This is the previous behavior of -F, --force-full, which instead now
    maintains the same parity organization and just recomputes it.
    * Added test options for selecting the file advise mode to use:
    --test-io-advise-none for standard mode
    --test-io-advise-sequential advise sequential access (Linux/Windows)
    --test-io-advise-flush flush cache after every operation (Linux)
    --test-io-advise-flush-window flush cache every 8 MB (Linux)
    --test-io-advise-discard discard cache after every operation (Linux)
    --test-io-advise-discard-window discard cache every 8 MB (Linux)
    --test-io-advise-direct use direct/unbuffered mode (Linux/Windows)
    The new default mode is 'flush' in Linux (before it was 'sequential'),
    and 'sequential' in Windows (like before).
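    For example, to run a sync with one of the modes listed above, the
    option is passed on the command line:

    ```
    snapraid --test-io-advise-direct sync
    ```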
    * For Seagate SMR (Shingled Magnetic Recording) disks, ignore the
    SMART attribute 188 Command_Timeout, as it's not reliable.
    * Fixed running on Windows platforms that lack the RtlGenRandom()
    function.
    * Added the --test-io-cache=1 option to disable the multi-thread IO.

  • rubylaser

    rubylaser - 2016-11-21

    Thanks again for the hard work Andrea! The split parity is a killer feature in this version, and has been working great for me through the betas.

  • Leifi Plomeros

    Leifi Plomeros - 2016-11-21

    Same here. The split parity function is great and has been working
    perfectly.
    Thank you!

    Another tiny but much appreciated option is --test-fmt disk (which
    could perhaps be given a better alias in the future).

    • Gary Snow

      Gary Snow - 2016-11-29

      --test isn't in the manual, how is this used?

      • Leifi Plomeros

        Leifi Plomeros - 2016-11-30

        snapraid diff --test-fmt file (relative location inside the snapraid array without mountpoint)
        snapraid diff --test-fmt disk (diskname used in config + relative location)
        snapraid diff --test-fmt path (absolute/complete location, as seen by file system)

  • oidenburga

    oidenburga - 2016-11-23

    I noticed a speed increase.
    When I scrubbed with v10 I was at about 340-360 MB/s, and now I am
    at 390-410 MB/s.


  • Leifi Plomeros

    Leifi Plomeros - 2016-11-30

    Added a new -R, --force-realloc option to reallocate all the parity information keeping the precomputed hash. This is the previous -F, --force-full that instead now maintains the same parity organization and just recomputes it.

    Would it be possible to expand this option to handle the following scenarios?

    A) Changed block size (verify the old hash in parallel with the
    calculation of the new hash, and give a warning message if the old
    hash does not match. This would make it a lot easier to change the
    block size without worrying about silent corruption afterwards).

    B) Non-existing parity file (also makes it possible to start using
    split parity without worrying about silent corruption afterwards).

    Personally I am perfectly happy with my current parity setup and
    block size, so for me there is zero urgent need for any of these
    possibilities. But I assume especially B) would be very convenient
    for anyone using version 10 and wanting to use the split parity
    feature in v11.

    • Andrea Mazzoleni

      Hi Leifi,

      Changing the block size would be difficult, as the hash is
      computed on the block, and if the size of the block changes, it
      won't be possible to compute both hashes.

      For the non-existing parity, isn't it already working?


      • Leifi Plomeros

        Leifi Plomeros - 2016-12-05


        It never occurred to me that it could already be used to split
        parity, so I have not tested it :)

        In theory, changing the block size does not seem very
        complicated. At least not in the simplest form, when increasing
        the block size to a multiple of the old block size, like
        256 --> 1024.

        When you calculate the new block hash for a data block, you
        would already have it in RAM. In the above example, it would
        mean that you could at this point also verify that each 256 KiB
        section of the new block matches the old hashes before
        calculating the new hash for the entire block.

        Changing the block size to a non-multiple of the old block size,
        or to a smaller block size, would of course be a bit more
        complicated, since you would need to implement some sort of
        buffer where multiple smaller or misaligned blocks would remain
        until enough of them are available to compare against the hashes
        of the old blocks.

        But most likely it would be very few users who are interested in this specific feature so it is probably not worth doing if it is not very simple to implement.

  • CybrSage

    CybrSage - 2016-12-05


  • WarmongerX

    WarmongerX - 2016-12-08

    Hmm, I thought v11 would fix the problems I had in v10 with the
    half-speed issue, but it's the same. Avg speed for v10 and v11 is
    390 MB/sec for sync and scrub routines, while 9.3 is 600-700 MB/sec
    for the same thing.

    I'll check the LSI site for more updated drivers/firmware than when I posted about this 8 months ago.

  • WarmongerX

    WarmongerX - 2016-12-12
    • Added the --test-io-cache=1 option to disable the multi-thread IO

    I looked back at the previous thread where we were hashing this out
    and ran snapraid test-dry --test-io-cache=1 for 10 minutes, and it
    ran faster than before... averaging 760 MB/s instead of 390 MB/s
    over 10 minutes. Is this what I should be using in my daily script
    runs?

    If I wanted to run in monothread mode for my syncs and scrubs, would
    the syntax be snapraid sync --test-io-cache=1 ?

    • Andrea Mazzoleni

      Hi WarmongerX,

      Yes. Just add the --test-io-cache=1 argument to your sync and
      scrub commands.

      It's still unknown why your machine is slower with multithreaded
      IO, but at least now you have a workaround.

      Something else you can also try is the new --test-io-advise
      options, to see if they make a difference.
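      For example, a daily script along these lines (with no other
      arguments shown) would run both commands with monothreaded IO:

      ```
      snapraid --test-io-cache=1 sync
      snapraid --test-io-cache=1 scrub
      ```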


  • WarmongerX

    WarmongerX - 2017-01-02

    Forgot to respond back and let you know that adding that command to my daily scripts worked. I'm syncing and scrubbing at v9 speeds again with v11 now. I wish I knew why it wasn't handling multithread correctly. I don't think my configuration is out of the norm for what I'm doing, but at least I'm able to use the new versions.

