Wasn't the last sentence in your last paragraph (before my post) "When the sync software realized that 90% of the photos had gone missing, they were removed from the Google Drive as well :-("? How is what I wrote NOT related to that? Are you having trouble parsing my simple post (I really can't make it simpler for somebody who can't even see how it's related to syncing to Google Drive) or are there multiple people using your account?
If you're using the official Backup & Sync client (which sucks big time) there's an option to remove things everywhere once they are removed here (whatever the exact wording is) - you need to uncheck that if you want a backup, for obvious reasons. Still, you should probably find everything that was removed from the drive in the trash. If using rclone it is highly recommended to use the "--backup-dir" option (preferably with a different folder for each run, generated from the timestamp) - this would make removed/changed (think of...
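As a minimal sketch of what I mean (the remote name and paths are just placeholders, adapt to your own setup):

rclone sync /mnt/pool/photos gdrive:photos --backup-dir gdrive:deleted/$(date +%Y-%m-%d_%H%M%S)

With that, anything rclone would delete or overwrite on the remote gets moved into a per-run timestamped folder instead of disappearing; just remember --backup-dir has to be on the same remote as the destination and outside the destination path.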
Generally speaking, unless you are using autosave, the progress of a sync will be lost if interrupted unexpectedly. From the manual:

autosave SIZE_IN_GIGABYTES
Automatically save the state when syncing or scrubbing after the specified amount of GB processed. This option is useful to avoid having to restart long "sync" commands from scratch when they are interrupted by a machine crash, or any other event that may interrupt SnapRAID.
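For example, a single line in snapraid.conf (500 is just an example value, pick what makes sense for your array size):

autosave 500

makes snapraid save its state every 500 GB processed, so an interrupted sync can pick up from the last checkpoint instead of starting from zero.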
Generally speaking we have deduplication for (at least, but not only) NTFS and btrfs. Snapraid, as a normal user-space app, just sees the files as normal files and works fine, except for using more parity space than really "needed". Sure, normally you should use parity drives that are a bit bigger than your biggest disk, but if your deduplication is any good you might end up needing MUCH larger disks. It's clear that's a major undertaking but some way to have this duplicated data take space only once...
I would investigate first without changing anything, this way you're sure to get the root cause instead of assuming something fixed it. Run procmon from the link above (note it's a microsoft.com freeware, not some possible trojan or something), you'll probably need to do some filtering but it's very easy to use - and if something is hitting your disk C: that much it should be very clear what it is. Yes, removing the content file should be as simple as commenting it out in snapraid.conf (and a sync just to see everything...
From what I see you just have one content file on C:, so if you don't have any swapping that should be your issue. It's very easy to use "one less" content file; just do a sync, make sure all files are updated (you can even checksum them) and then remove the one on C: and the corresponding line from the config. That's it. Before that maybe you can run https://docs.microsoft.com/en-us/sysinternals/downloads/procmon to confirm this is where snapraid is writing. Edit: now that I think about it, actually the content file...
From what I see you just have one content file on C:, so if you don't have any swapping that should be your issue. It's very easy to use "one less" content file; just do a sync, make sure all files are updated (you can even checksum them) and then remove the one on C: and the corresponding line from the config. That's it. Before that maybe you can run https://docs.microsoft.com/en-us/sysinternals/downloads/procmon to confirm this is where snapraid is writing. It is NOT the case for snapraid (unless you find something...
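To illustrate (paths made up, yours will differ), the relevant part of snapraid.conf would go from:

content C:\snapraid\snapraid.content
content D:\snapraid\snapraid.content
content E:\snapraid\snapraid.content

to, after a clean sync, simply:

# content C:\snapraid\snapraid.content
content D:\snapraid\snapraid.content
content E:\snapraid\snapraid.content

You still keep copies of the content file on the data drives, just none on C: anymore.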
Yes, the only thing you need is to enable the time stamp thingy in veracrypt and everything will work fine with snapraid, BUT: how do you use your containers? Do you use them as some kind of archives (like you put all the documents from 2017 and keep them in one container) and usually don't change anything in them? That is fine. However, if you have your "working drive" with many things that change all the time, with a lot of documents, maybe a browser profile and so on, then having snapraid include such...
Drobo is absolutely not a good fit:
- it will reject drives for just one error
- if it rejects more than 1 (for normal parity) or 2 (for dual parity) then your data is gone (including if this happens during the reconstruction, which can take days - note that even if you don't replace a drive after Drobo fails one, it shuffles a lot of data across the remaining disks)
- the format is proprietary and you can't get anything back if something is wrong, apart from the fact that you can't just copy your files...
Recent rumors about crashplan: https://www.reddit.com/r/DataHoarder/comments/6brlnk/crashplan_ending_personal_accounts/
I finally managed to sit down and parse the well-organized first post. I don't know...
Thanks a lot, I installed it and it seems to be CONSIDERABLY faster than mhddfs,...
I'm using mhddfs. What am I missing, besides speed on slow cores (it is a fuse fi...
I think snapraid really needs something similar to rsync's --max-delete feature,...
Midnight Commander now has the patch for nanosecond (I don't follow the versions...
I've had a couple silent errors on my main data drive on files I wasn't touching...
Given that I haven't found yet any graphical (and I looked for both X and ncurses,...
There's something with the file here: Error opening file 'bench/disk1/a/.xpfblg2'....
https://github.com/amadvance/snapraid/releases/download/v11.0/snapraid-11.0.tar.gz...
There's no way to detect changes to a file + corruption if they happen between snapraid...
Is "Monkey.flac" actually in the "root of the array", i.e. do you have in snapraid.conf...
This is a bit tricky, I've been hit by this too a couple times - it is probably because:...
There are very, very, very few and convoluted scenarios where a file is changed...
I don't use snapraid on Windows but I do believe ReFS would be a good option for...
The parity needs for each disk to be larger than the size of the files on that disk...
Correct, the option is in truecrypt/veracrypt itself. Keep in mind that changing...
There's no need to apologize, also you don't need to really have some special skills...
You can try to copy some files there, preferably some that don't compress well (like...
Even if it might seem that I find your idea redundant I don't - at all - it's ALWAYS...
One solution, granted fiddlier than one would like, would be to exclude some files/folders...
The size of the already sync-ed data doesn't matter much, what does matter is how...
Ok, now it starts to get extremely nasty... I said ok, let me try some GUI file managers,...
Well it isn't that easy ... there is a reason why I mentioned "move", it is just...
Actually I'm getting more and more frustrated with the shortcomings of many linux...
-U (or --force-uuid, it's the same thing) isn't "safe", on the contrary you should...
If the sync went through I think everything should be fine and the new UUID stored...
I really couldn't believe ... after rsync just recently having support for sub-second...
Do you have enough space available there? Can you try with the same user in the same...
Frankly I'm having more trouble to find a way to connect a large number of drives...
USB sticks in RAID-1, firefox, plex, jdownloader ... this isn't something you "set...
The last "parity" from things like "parity_error:5277559:parity" says that the error...
The names are of course synced; it would be a big mess if you change the names of...
I'm using "F3" from Far Manager to open large files. What do you mean by "renamed"?...
I would be EXTREMELY surprised if any snapraid user would be hit by something like...
I checked with both 10 stable and the latest 11 beta; it is the same.
When running a check or fix with something like snapraid -v -l /tmp/test -f wkfhlkwhflkh4lk...
I third that!
Snapraid will not change your data (unless restoring), if you lose the parity you...
Is that a snapraid content file? If yes that's the sign of a much bigger issue that...
I'm having the same issue/question: https://sourceforge.net/p/snapraid/discussio...
Well frankly in the triple digits ($$$) you're better off with any small desktop...
If you encrypt the whole device snapraid will run for sure but saving (encrypted)...
I think you are hit by the "-p" parameter: -p, --percentage PERC Selects the part...
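If that's the case and you want a scrub to cover everything in one pass, you can override it explicitly, something like:

snapraid -p 100 -o 0 scrub

(-p/--percentage sets how much of the array to scrub per run and -o/--older-than restricts it to blocks not scrubbed for that many days; the values here are just an example to force a full scrub.)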
Yes, this is it - I asked for this "feature". Negative is good and "more negative"...
Yes, this is it - I asked for this "feature". Negative is good and "more negative"...
Thanks a lot, using it for the last few days. I noticed this (see my post above),...
This is not a joke, really. Really, I do have a basic understanding of shell expansion...
I have one 2TB ext4 with 1.6 million files (digital pictures, lots of them taken...
You don't get the content from (for example) /mnt/music mounted under /mnt/pictures...
Is there any reason I'm missing to not use proper UUID mounts? I can see some where...
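Just to illustrate what I mean by a proper UUID mount: get the UUID with blkid and then use it in /etc/fstab instead of the device name, something like (the UUID below is obviously made up):

UUID=1a2b3c4d-e5f6-7890-abcd-ef1234567890  /mnt/disk1  ext4  defaults,noatime  0  2

That way the mount point follows the filesystem even if the kernel reshuffles the /dev/sdX names between boots.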
This was my request: http://sourceforge.net/p/snapraid/discussion/1677233/thread/253fbb2f/...
Do you have all the important data you want in a safe place? If yes, you can proceed...
The documentation does mention this under limitations: "The main one is that if a...
FYI I think the exit codes changed in snapraid 8 and now snapraid-runner bails out...
First you need to stop changing anything until you understand what's going on. This...
If you were changing a file while snapraid was running, yes there would be error...
I don't have the time and energy for now to either check the source or test how is...
I've been using the 8 betas for quite a while, all fine. Thank you again for the...
I don't see any trouble at all with that.
13.3% is worse than Russian roulette with 5 bullets and one empty. Better than zero...
As I mentioned in the configuration proposed: 1x(2+2) TB data disk (empty) 1x(2+2)...
As I mentioned in the configuration proposed: 1x(2+2) TB data disk (empty) 1x(2+2)...
I have a problem with combining RAID-0 (or similar) with RAID for no good reason....
Normally you need the parity to be larger than the maximum of: size of files + nr...
Normally you need the parity to be larger than the maximum of: size of files + nr...
First of all I'm not sure about the penalty of having some (non-snapraid) data on...
Thank you very much, I just installed the beta from today and it works as expected!
Normally you can test with badblocks but that will stress the drive a lot and I strongly...
You have a broken disk; making it read-only is the kernel's standard way of dealing with...
I'm bumping this, as it is pretty straightforward - in status.c make wasted int64_t...
Hundreds of thousands of files is absolutely nothing to write home about. My linux...
It would solve the problem, roughly, maybe there would be still some small overhead...
8GB average size on 3000GB disk means 375 files, with 128 KiB wastage average per...
Hi Andrea, now that I realize we already have what I wanted all the time ... this...
I feel so stupid and I've been wondering all the time what "Wasted" might be ......
It is hard to say what is "more" user friendly, never mind MUCH more - in this case...
That still wouldn't be much change compared to what we have now, the only difference...
This will be clean only if you have already ALL the data you want to put on the disks...
5-8% sounds a bit much for large files. What's the maximum number of files you have...
On ext filesystems the big waste (apart from the trivial 5% reserved for root) is...
That isn't a bash script, it is the actual source code from the actual code that runs...
The beauty of having the source available. This is coming from: if HAVE_FSYNC int...
The beauty of having the source available. This is coming from: if HAVE_FSYNC int...
Depending on brand and model you may significantly exceed the max yearly workload...
Yea, I have snapraid + mhddfs myself, everything working as it should (have the 7...
Have a look inside /etc/mke2fs.conf For me it shows: largefile = { inode_ratio =...
Have a look inside /etc/mke2fs.conf For me it shows: largefile = { inode_ratio =...
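If a disk will hold mostly big files you can pick that profile already at format time, something like (the device name is just a placeholder):

mkfs.ext4 -T largefile /dev/sdX1

which tells mke2fs to use the largefile inode_ratio from that config, so far fewer inodes get created and less space is lost to the inode tables.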
Keep in mind that in any modern computer writes to any filesystem are writes to memory...
There is something wrong with that disk (U:). Try hdtune on it (the long check, last...
Assuming you are not starting a recovery (in which case the safest move would be...