SnapRAID and excessive C drive usage

Help
DCMAKER
2018-04-30
2018-05-11
  • DCMAKER

    DCMAKER - 2018-04-30

    So I posted about this in the past. I have virtual memory set to 0, yet every time I run a sync command it is constantly writing to my C drive and murdering my endurance.

    I am at 70TB of writes on my C drive now. I have gone up about 20TB of writes in the last few months. See graphs below.

    See the linked screenshots of wear. This is directly related to SnapRAID syncs. I have virtual memory set to 0, so it isn't Windows. Does the SnapRAID program being on the C drive cause a ton of writes to the C drive? Seems idiotic.
    I also have 64GB of ECC RAM. RAM usage went from 15GB to 58GB while syncing, and my C drive was running at 200-250MB/s read/write for the entire 4-5 hour sync.

    I also included my SnapRAID config file for reference.
    I also included a screenshot showing the sync writing to C. In this screenshot I am only using 30GB of my 64GB of RAM.
    https://www.dropbox.com/sh/9hvtxr0ldtnbuwd/AADuUJ5WeaKfD5Xv3NE5q3qpa?dl=0

     

    Last edit: DCMAKER 2018-04-30
  • John

    John - 2018-04-30

    From what I see you just have one content file on C:, so if you don't have any swapping, that should be your issue. It's very easy to use "one less" content file: just do a sync, make sure all files are updated (you can even checksum them), and then remove the one from C: and the matching line from the config. That's it.

    Before that maybe you can run https://docs.microsoft.com/en-us/sysinternals/downloads/procmon to confirm this is where snapraid is writing.

    Edit: now that I think about it, the content file is actually read once at the start and written at the end (or at autosave time, which is 1000 for you, so not crazy low). So maybe it's best to find out precisely what's going on.

    It is NOT the case for snapraid (unless you find something I'm missing), but generally speaking it's very bad to have software that writes tons of data under the default temp folders, which are now most likely on SSDs. For example, duplicati is very likely to be used on laptops that have small (sometimes VERY hard to replace) internal SSDs but big external drives. If you have even one 8TB USB drive and you do a couple of backups, tests, and restores (local-to-cloud, or attach another USB drive and do local-to-local) that all go through your SSD, you'll kill your machine in no time.

     

    Last edit: John 2018-04-30
  • DCMAKER

    DCMAKER - 2018-05-01

    Okay, so if I understand correctly, you are saying I can just delete the content file from my C drive? That is what you think the issue is?

    I do have a content file on every other drive ATM, so I am not concerned about it getting lost in a crash.

    If deleting the content file doesn't solve the issue, any other causes you can think of?

    So all I need to do is delete the content file, put a # in front of the C: content line, and run a sync to fix SnapRAID after doing so?

    Also, for some reason SnapRAID auto-saved after 500GB. I originally set 500GB but changed it to 1000GB, and it still does 500GB... is there a reason why it didn't update?

     
  • John

    John - 2018-05-02

    I would investigate first without changing anything; this way you're sure to find the root cause instead of assuming something fixed it. Run procmon from the link above (note it's Microsoft freeware, not some possible trojan or something). You'll probably need to do some filtering, but it's very easy to use, and if something is hitting your C: drive that much, it should be very clear what it is.

    Yes, removing the content file is just a matter of commenting it out in snapraid.conf (plus a sync just to check that everything is good; actually, if the array was synced before, it will be afterwards as well).
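    [Editor's note] The change described here would look roughly like this in snapraid.conf; the drive letters and paths below are illustrative examples, not the poster's actual configuration:

    ```conf
    # Comment out the content file on the SSD boot drive:
    #content C:\snapraid\snapraid.content

    # Keep at least one content file per data disk, e.g.:
    content D:\snapraid.content
    content E:\snapraid.content

    # Autosave threshold in GB:
    autosave 1000
    ```

    After commenting out the line, the old content file on C: can be deleted and a sync run to confirm everything is still good.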

    Not sure about the precise autosave count; it would surprise me if the setting were read from anywhere other than snapraid.conf, and I don't see why it would be. But again, I don't know and I haven't tested this.

     
  • Leifi Plomeros

    Leifi Plomeros - 2018-05-02

    The autosave amount is the sum of processed data from all data disks collectively.
    If sync is processing both data disks in parallel, 1000 GB = 500 GB per disk.
    If only one disk is read, it is 1000 GB on that disk.
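    [Editor's note] The accounting described above can be sketched as follows; this is a hypothetical simplification of the behavior as explained in this post, not SnapRAID's actual code:

    ```python
    def autosave_due(processed_gb_per_disk, autosave_gb):
        """Return True when an autosave would trigger.

        The threshold applies to the TOTAL data processed across
        all data disks, not to each disk individually (sketch of
        the behavior described above, not SnapRAID's own code).
        """
        return sum(processed_gb_per_disk) >= autosave_gb

    # Two disks synced in parallel: 500 GB each reaches a 1000 GB
    # autosave setting, which looks like "autosave at 500GB" per disk.
    print(autosave_due([500, 500], 1000))  # True

    # A single disk must reach the full 1000 GB on its own.
    print(autosave_due([500], 1000))       # False
    ```

    This would explain the 500GB-per-disk observation earlier in the thread: with two data disks syncing in parallel, a 1000GB setting fires when each disk has processed about 500GB.
    
    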

     
    • DCMAKER

      DCMAKER - 2018-05-03

      I know that, but for some reason it's still auto-saving at 500GB :/ I guess I can just try 2000GB.

       
  • DCMAKER

    DCMAKER - 2018-05-11

    I am still investigating, but that did appear to reduce a lot of the writes. It still reads a crap ton from my SSD, but I am not seeing the massive amount of writes like I was.

     
