Hello again. A couple of questions about parity disks before I set up SnapRAID:
I will be doing triple parity with 21 data drives (24 4TB drives in total)
Is it possible for the parity drives to be mounted as volume mount points in Windows Server 2008 R2 with NTFS? When I tried this initially, it complained that the drive ID was the same for both parity drives (I initially tried 10 data and 2 parity drives as a test).
For 4TB drives I get a formatted space of 3.62TB. Since the parity file needs slightly more space than the data it protects (there will be no data on the parity drive, just the parity file), is it recommended to create a smaller partition than the maximum on the data drives?
Are there any best practices for this if you don't want to repartition your drives - i.e. any way to keep the data drives from being filled up (besides manually managing it)? I'm looking at Stablebit DrivePool, so I'll check if it can reserve free space on a data drive.
Thanks again for all your help. Hopefully I'll get SnapRAID set up this week and we'll see how long it takes to build the parity on 76TB of data...!
Sorry, I don't know about this on Windows, but I'd love to hear more about your hardware and see some pictures.
As soon as I have it up and running I'll be happy to post pics, etc!
I'd like to hear what you chose for pooling. I'd love to use my DrivePool licence, but sadly DrivePool isn't able to pool from TrueCrypt containers/drives :(
I'm looking for stable, well-working alternatives.
Do you still feel confident using TrueCrypt given that it is EOL and its reported compromise?
http://www.theregister.co.uk/2014/05/28/truecrypt_hack/
https://gist.github.com/ValdikSS/c13a82ca4a2d8b7e87ff
Last edit: rubylaser 2014-06-20
Yeah, since the complete source code audit so far didn't show ANY compromise, 7.1a works fine for me and almost everybody else I know - and I know a lot of people who are using it. I'm fully aware of the "security issue" that no one knows anything about. Apart from that, there is no other software with a comparably audited open-source codebase that supports pre-boot auth etc.
Imho there is too much fuss about the "insecurity" - time will tell.
Okay, I just wanted to make sure you knew that TrueCrypt was EOL and that newer versions should not be used. I don't use Windows on my home SnapRAID box (Linux), so the only other full disk encryption option I could suggest on Windows is BitLocker, and it's not open source, so it doesn't meet your criteria. It's too bad to see TrueCrypt end this way. It was/is a great application.
On Linux, I use LUKS, which meets both criteria: open source and pre-boot authentication. I normally encrypt the OS disk, and once it's unlocked, it automatically decrypts my data disks. This saves me from entering passwords for each of my many data disks :)
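For reference, the auto-unlock setup I describe boils down to keyfile entries in /etc/crypttab; the mapper names, UUIDs, and keyfile paths below are placeholders, not my actual layout:

```
# /etc/crypttab - unlock LUKS data disks with keyfiles stored on the
# (already-unlocked) encrypted OS disk.
# Each keyfile is first registered with:
#   cryptsetup luksAddKey /dev/sdX1 /root/keys/dataN.key
#
# <name>   <device>                 <keyfile>              <options>
data1      UUID=1111aaaa-example    /root/keys/data1.key   luks
data2      UUID=2222bbbb-example    /root/keys/data2.key   luks
```

At boot you type one passphrase for the OS disk; the data disks then unlock unattended from the keyfiles.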
You could use NTFS quotas, or you could use Storage Spaces. I don't think there is an accepted best practice, but it probably doesn't hurt to format the data drives with a partition that leaves something like 50 GB unallocated. You can always grow the partition later if you really need the space. For simplicity (of recovery) I'd personally just go with that. The overhead really depends on the average size of your files.
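On the 3.62 TB figure: most of the gap is just the decimal-TB vs binary-TiB unit difference, with NTFS metadata taking a little more. A quick sanity check (the ~0.5% filesystem overhead is my own rough assumption, not a measured value):

```python
# A "4 TB" drive holds 4 * 10**12 bytes, but Windows reports capacity
# in binary units (1 TiB = 2**40 bytes), which accounts for most of
# the "missing" space.
raw_bytes = 4 * 10**12
tib = raw_bytes / 2**40
print(f"raw capacity: {tib:.2f} TiB")                        # 3.64
print(f"after ~0.5% NTFS overhead: {tib * 0.995:.2f} TiB")   # 3.62
```

So the parity partition only needs to be a little larger than the data partitions it covers, not larger than the raw 4 TB figure.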
therealjmc: I haven't used it personally, but from the posts in this forum, the pooling FlexRAID offers seems to be the most mature option.
I use FlexRAID and it works great. What's unusual is that I wanted the absolute simplest pool solution. I don't want any duplication, striping, spanning, funny directories, replication, or anything else. I just want all of the selected drives to show up as a single drive.
What sucks is that it takes the most expensive solution to do the least. But it works perfectly.
How does FlexRAID pooling handle writing?
If I have pooled volume A:\ and copy a bunch of files to A:\SomeFolder\
Will it understand that I want to put those files in the existing SomeFolder on the only disk where such a folder exists?
That is pretty much the only feature I miss in Liquesce; without it, I'm stuck always manually selecting which disk to write to, or I'll suffer a complete mess if I ever want to move to another solution.
No idea. I don't write to the pool; I write to individual disks. That's what is so frustrating about pooling software. I wanted the most basic function - "put the selected drives together" - and it was almost impossible to find.
That is exactly how I use Liquesce today.
I've told it that I want E:\, F:\, G:\, H:\, I:\ and T:\Temp\ to be presented as A: and the result is that I have a 20TB volume A: which immediately reflects any changes that I make to E:\, F:\, G:\, H:\, I:\ or T:\Temp\
And it is free.
I tried it a month or two ago and didn't like it. I don't remember quite why. I think I've tried just about all of them that I can find. I may reinstall and look at it again.
Back on the original topic.
If you want to avoid repartitioning, you can just make a folder on each disk, tell SnapRAID in the config file to ignore those folders, and put ~20 GB of junk files in each.
After that, you can fill the data disks to 100% without worrying that the parity file will grow too big, since you will always have 20 GB hidden from SnapRAID.
If you want to be on the safe side, you can put 50-100 GB of junk in there; when the first disk is full, you should have a good idea of how much buffer you really need and can safely delete some of it.
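A minimal sketch of that junk-file approach in Python (the `.snapraid-reserve` folder name and file sizes are my own choices; you'd exclude the folder in snapraid.conf with an `exclude` rule such as `exclude /.snapraid-reserve/`):

```python
import os

def reserve_space(mount_point, total_bytes, chunk_bytes=1024**3):
    """Fill <mount_point>/.snapraid-reserve with zero-filled files
    totalling roughly total_bytes, so real data can never consume
    the last part of the disk."""
    reserve_dir = os.path.join(mount_point, ".snapraid-reserve")
    os.makedirs(reserve_dir, exist_ok=True)
    block = b"\0" * (1024 * 1024)  # write in 1 MiB blocks
    written = 0
    index = 0
    while written < total_bytes:
        size = min(chunk_bytes, total_bytes - written)
        path = os.path.join(reserve_dir, f"reserve-{index:03d}.bin")
        with open(path, "wb") as f:
            remaining = size
            while remaining > 0:
                n = min(len(block), remaining)
                f.write(block[:n])  # actually allocate the space
                remaining -= n
        written += size
        index += 1
    return written
```

Deleting one or two of the reserve files later frees space instantly, which is exactly the "safety buffer" behaviour described above.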