From: Sorin S. <sor...@or...> - 2011-01-13 12:56:00
Attachments:
smime.p7s
Hi all,

While this question is not strictly BackupPC, I think the most knowledge regarding backup systems is available here. Please bear with me, or send me off. 8-)

I'm planning to build a new backup server with 6x 1TB disks in a software raid-0 fashion à la my old 1.5TB system, only bigger and faster. Since I'd like to have those six TBs as one big partition for simplicity's sake, apparently I need to use something called GPT disks and create the partitions using parted, according to Google.

I've never used GPT disks before, and parted I believe is the terminal equivalent of gparted, which I have used before.

Are there any gotchas from BackupPC's, or from the OS's (CentOS 5.5 x64), point of view that I should be aware of, in your opinion?

Thanks for any hints.

--
BW, Sorin
-----------------------------------------------------------
# Sorin Srbu [Sysadmin, Systems Engineer]
# Dept of Medicinal Chemistry,  Phone: +46 (0)18-4714482 >3 signals> GSM
# Div of Org Pharm Chem,        Mobile: +46 (0)701-718023
# Box 574, Uppsala University,  Fax: +46 (0)18-4714482
# SE-751 23 Uppsala, Sweden     Visit: BMC, Husargatan 3, D5:512b
# Web: http://www.orgfarm.uu.se
-----------------------------------------------------------
# ()  ASCII ribbon campaign - Against html E-mail
# /\
#
# MotD follows:
# Luck is a relative term, and may be revised with time. That time is now...
From: Sorin S. <sor...@or...> - 2011-01-13 13:10:48
I forgot to mention that the OS itself will sit on a separate disk of its own. Only the backup space will be on the GPT 6TB raid array. I think this will take care of the problem of CentOS not installing on a GPT partition.

--
/Sorin

>-----Original Message-----
>From: Sorin Srbu [mailto:sor...@or...]
>Sent: Thursday, January 13, 2011 1:39 PM
>To: 'General list for user discussion, questions and support'
>Subject: [BackupPC-users] OT: New backup server with 6TB disk space
>
>Hi all,
>
>While this question is not strictly BackupPC, I think the most knowledge
>regarding backup systems is available here. Please bear with me, or
>send me off. 8-)
>
>I'm planning to build a new backup server with 6x 1TB disks in a
>software raid-0 fashion à la my old 1.5TB system, only bigger and faster.
>Since I'd like to have those six TBs as one big partition
>for simplicity's sake, apparently I need to use something called GPT disks
>and create the partitions using parted, according to Google.
>
>I've never used GPT disks before, and parted I believe is the terminal
>equivalent of gparted, which I have used before.
>
>Are there any gotchas from BackupPC's, or from the OS's (CentOS 5.5
>x64), point of view that I should be aware of, in your opinion?
>
>Thanks for any hints.
From: Sorin S. <sor...@or...> - 2011-01-13 13:52:25
>-----Original Message-----
>From: Michael 'Moose' Dinn [mailto:mic...@ai...]
>Sent: Thursday, January 13, 2011 2:16 PM
>To: Sorin Srbu
>Subject: Re: [BackupPC-users] OT: New backup server with 6TB disk space
>
>You don't need GPT at all for this.
>
>Install the OS on a 300G disk (or whatever), then make a single
>partition on all of your 1T disks.
>
>Then, create your large raid disk:
>
>mdadm --create /dev/md1 -n 6 -l 5 /dev/sd[bcdefg]1
>
>then just mke2fs -t ext4 /dev/md1

That would work too? Then what's all this about "GPT is a must if you want to use sizes bigger than 2TB"?

>(and yes that's a raid5, not raid 0... raid 0 for backups is not your
>friend!)

Ah, yes... I know. 8-) However, the last time I used a raid5 with BPC, the backup storage got all screwed up for some reason after having worked flawlessly for some time, necessitating a complete rebuild of the array. Since then I'm not too keen on raid5, although raid5 would theoretically be ideal. 8-/ I feel raid5 gives me a false sense of security.

I do however run raid5 on a Dell PE with a PERC raid controller. There it has worked fine for a few years now. Maybe software raid5 isn't as stable as its hardware counterpart. Speculations, speculations...

--
/Sorin
From: Sorin S. <sor...@or...> - 2011-01-13 14:29:22
>-----Original Message-----
>From: Michael 'Moose' Dinn [mailto:mic...@ai...]
>Sent: Thursday, January 13, 2011 2:58 PM
>To: sor...@or...; General list for user discussion,
>questions and support
>Subject: Re: [BackupPC-users] OT: New backup server with 6TB disk space
>
>> That would work too? Then what's all this with "GPT is a must if you want to
>> use sizes bigger than 2TB"?
>
>if you want to use a PARTITION size larger than 2T. If you just use the
>whole disk you're fine.

Doh! That was really obvious of course... Thanks for pointing it out though. 8-}

--
/Sorin
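For anyone wondering where the 2 TB figure comes from: an MBR (msdos-label) partition entry stores the start sector and sector count as 32-bit values, so with 512-byte sectors the largest addressable partition is 2 TiB. The arithmetic is a one-liner:

```shell
# MBR holds partition start/length as 32-bit sector counts.
# With 512-byte sectors that caps a partition at 2^32 * 512 bytes:
echo "$(( (1 << 32) * 512 / (1 << 40) )) TiB"   # -> 2 TiB
```

Using whole unpartitioned disks as mdadm members sidesteps the limit entirely, since no partition table is involved at all — which is Moose's point above.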
From: Pavel H. <pav...@iv...> - 2011-01-13 13:10:50
Sorin Srbu wrote:
> I'm planning to build a new backup server with 6x 1TB disks in a software
> raid-0 fashion à la my old 1.5TB system, only bigger and faster.

Six drives in RAID0 in a backup server - a pretty high chance of losing all your backups soon. How about six 2TB drives in raid10?

Do you have any plans for offline backup to external drives, in case of fire or a power surge?

Regards,

Pavel.
From: Sorin S. <sor...@or...> - 2011-01-13 13:45:01
>-----Original Message-----
>From: Pavel Hofman [mailto:pav...@iv...]
>Sent: Thursday, January 13, 2011 2:11 PM
>To: sor...@or...; General list for user discussion,
>questions and support
>Subject: Re: [BackupPC-users] OT: New backup server with 6TB disk space
>
>Sorin Srbu wrote:
>> I'm planning to build a new backup server with 6x 1TB disks in a software
>> raid-0 fashion à la my old 1.5TB system, only bigger and faster.
>
>6 drives RAID0 in a backup server - pretty high chance of losing all
>your backups soon. How about 6 2TB drives in raid10?
>
>Do you have any plans for offline backup to external drives in case of
>fire or power surge?

I know, I've been thinking about that too, but so far, in my ten years at this department, the main issue has been getting lost data back to the user fast. OTOH, should the raid0 die unexpectedly, I'm in trouble anyway.

For the Windows clients, I have a Volume Shadow Copy solution in hand as a temporary middle solution. For the linux clients, the data is also available on the client side for a while. The users know to back up their most important docs to sticks and whatnot. Our backup solution is the last resort only, really.

FWIW, I've run raid0 arrays for quite a while now, without too many problems. Especially the linux variety of software raid0 has been pretty impressive, IMO.

The clients and backup server are at separate locations, and we consider that secure enough. I've been running BPC for a couple of years now, and still nobody has asked for a file restore. 8-/

Anyway, I believe I'd lose too much space with a raid10 solution. And again, I know of the risk with raid0. The bigger they get, the higher the risk of a failure sooner or later.

--
/Sorin
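For a concrete sense of the space trade-off being weighed here, the usable capacity of six 1 TB drives under each layout mentioned in the thread works out as follows (a back-of-the-envelope sketch, ignoring filesystem and metadata overhead):

```shell
N=6   # number of drives
D=1   # capacity per drive, TB

echo "raid0:  $(( N * D )) TB"         # stripes only, no redundancy
echo "raid5:  $(( (N - 1) * D )) TB"   # one drive's worth of parity
echo "raid10: $(( N / 2 * D )) TB"     # mirrored pairs, then striped
```

So raid5 costs one drive of capacity relative to raid0, while raid10 halves it - the price being weighed against the risk of losing the whole pool to a single drive failure.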
From: Pavel H. <pav...@iv...> - 2011-01-13 15:04:24
Sorin Srbu wrote:
>>
>> Sorin Srbu wrote:
>>> I'm planning to build a new backup server with 6x 1TB disks in a software
>>> raid-0 fashion à la my old 1.5TB system, only bigger and faster.
>> 6 drives RAID0 in a backup server - pretty high chance of losing all
>> your backups soon. How about 6 2TB drives in raid10?
>>
>> Do you have any plans for offline backup to external drives in case of
>> fire or power surge?
>
> I know, I've been thinking about that too, but so far in my ten years at
> this department the issue of getting lost data back to the user fast is
> principal.
> OTOH, should the raid0 die unexpectedly I'm in trouble anyway.
> The users know to backup their most important docs to sticks and
> whatnot. Our backup-solution is the last resort only really.

Some history of previous versions is important for us too, and the backuppc server is the only source of historic data here.

> FWIW, I've run raid0-arrays for quite a while now, w/o too much problems.
> Especially the linux-variety of software raid0 has been pretty impressive
> IMO.

For 6+ drive SW raids we see drives being kicked out of arrays. Not necessarily a failed drive - sometimes moving to a new MB/case and booting with a loose cable, or a minor recoverable glitch in the drive. A simple re-adding has always cured the situation. With RAID0 we would have been screwed many times.

> The clients and backup-server are located on separate locations, and we
> consider that secure enough. I've been running BPC for a couple of years
> now, and still nobody has asked for a file-restore. 8-/

We use the file-restore feature almost daily :)
From: Sorin S. <sor...@or...> - 2011-01-13 15:36:12
>-----Original Message-----
>From: Pavel Hofman [mailto:pav...@iv...]
>Sent: Thursday, January 13, 2011 4:04 PM
>To: sor...@or...; General list for user discussion,
>questions and support
>Subject: Re: [BackupPC-users] OT: New backup server with 6TB disk space
>
>Some history of previous versions is important for us too, and the
>backuppc server is the only source of historic data here.

Some explaining is perhaps necessary. Our BPC project started because the *nix users needed to back up some really large molecular modeling projects. These are easily in the 5+GB range each, and the users sometimes produce five or so a day each. With four (current) users doing this, we fill up storage fast. Add to that that storage was still pretty expensive when this began, and the result is that we only keep about 5-6 full backups, depending on how much storage is available. That's our historic data.

>For 6+ SW raids we experience drives being kicked out of arrays. Not
>necessarily a failed drive, sometimes moving to a new MB/case and
>booting with a loose cable, or a minor recoverable glitch in the drive.
>A simple re-adding has always cured the situation. With RAID0 we would
>have been screwed many times.

Hmmm... Apart from the raid5 problems I had some time ago, our SW raids have been functioning very nicely indeed (knock on wood...).

>> The clients and backup-server are located on separate locations, and we
>> consider that secure enough. I've been running BPC for a couple of years
>> now, and still nobody has asked for a file-restore. 8-/
>
>We use the file-restore feature almost daily :)

Darn those sloppy users... ;-) I try to restore files and folders from time to time, just to check things out.

--
/Sorin
From: Les M. <les...@gm...> - 2011-01-13 15:59:14
On 1/13/2011 9:36 AM, Sorin Srbu wrote:
>
>> Some history of previous versions is important for us too, and the
>> backuppc server is the only source of historic data here.
>
> Some explaining is perhaps necessary. Our BPC-project started because the
> *nix-users needed to backup some really large molecular modeling projects.
> These are easily in the 5+GB range each, and the users produce sometime five
> or so a day each. With four (current) users doing this we fill up storage
> fast. With this in mind and the time this was begun, storage was still pretty
> expensive. This means we only keep about 5-6 full backups, depending on how
> much storage is available. That's our historic data.

Is there a lot of block-level duplication among these? If so it might be worth setting up something with zfs. You should be able to run the free version of nexentastor on something that size - not sure how it would work with backuppc.

--
Les Mikesell
les...@gm...
From: Sorin S. <sor...@or...> - 2011-01-14 08:37:54
>-----Original Message-----
>From: Les Mikesell [mailto:les...@gm...]
>Sent: Thursday, January 13, 2011 4:59 PM
>To: bac...@li...
>Subject: Re: [BackupPC-users] OT: New backup server with 6TB disk space
>
>Is there a lot of block-level duplication among these? If so it might
>be worth setting up something with zfs.

How can I tell?

>You should be able to run the
>free version of nexentastor on something that size - not sure how it
>would work with backuppc.

My next question was going to be about filesystems. 8-)

Ext3 has so far worked fine with 1.5TB and had decent performance, if not a speed demon. Xfs isn't supported yet in CentOS, I think. Zfs focuses mainly on really big files, IIRC. Ext4 would maybe have been a good choice, but isn't supported in CentOS either, if I'm not mistaken.

I'd prefer sticking with CentOS and BPC, as both are relatively well known now at the IT department, i.e. me. 8-) Any particular reason I should look into Nexentastor? The zfs?

--
/Sorin
From: Sorin S. <sor...@or...> - 2011-01-14 11:01:39
>-----Original Message-----
>From: Daniel Berteaud [mailto:da...@fi...]
>Sent: Friday, January 14, 2011 10:02 AM
>To: sor...@or...; General list for user discussion,
>questions and support
>Subject: Re: [BackupPC-users] OT: New backup server with 6TB disk space
>
>Hi, here's my advice:
>
>- First, don't bother with GPT, just create an LVM PV on top of your RAID
>device, then create a Volume Group, and one big Logical Volume. LVM is
>great for storage management, and I'd strongly encourage its use for
>such a big volume.

How does LVM do if I need it to sit on a raid0 array? Does it even care? Before, I've always added partitions the "old-fashioned way" with fstab entries, you know, "/dev/md0 /bak" etc.

>> Ext3 has so far with 1.5TB worked fine and had decent performance, if not a
>> speed demon. Xfs isn't supported yet I think in CentOS.
>
>AFAIK XFS is supported on CentOS 5.5 64bits (not on 32 bits)

Oh, that might explain some things... I've only looked at 32-bit CentOS, as that is what the current BPC server runs. Never bothered otherwise. Thanks for the heads-up!

>> Ext4 would maybe have been a good choice, but isn't
>> supported in CentOS either if I'm not mistaken.
>
>EXT4 is available as a technology preview in CentOS 5.5, and will be
>fully supported by Red Hat in RHEL 5.6 (available since yesterday; CentOS
>5.6 should follow in about one month).

Yep, RHEL 5.6 wasn't announced when I wrote that. 8-)

>I personally use ext3 most of the time for BackupPC, with some quite
>large partitions (the biggest is actually 4 TB). It's working great, and is
>very reliable. But for your usage (lots of big files), you may take
>advantage of ext4 extent allocation, so I'd go for ext4.

I'd say approx 25-30% are huge files, 50% really small files (approx 4-10kB each, and millions of them), and the rest in between. Would you still recommend ext4 then?

My main problem with ext3 is that deletion of whole folder structures, with a million or so files and folders in them, takes forever to complete. Not that I do it very often, but when...

--
/Sorin
From: Sorin S. <sor...@or...> - 2011-01-17 09:34:13
>-----Original Message-----
>From: Daniel Berteaud [mailto:da...@fi...]
>Sent: Friday, January 14, 2011 3:57 PM
>To: sor...@or...; General list for user discussion,
>questions and support
>Subject: Re: [BackupPC-users] OT: New backup server with 6TB disk space
>
>> How does the LVM do if I need it to be a raid0-array. Does it even care?
>> Before I've always added partitions the "old-fashioned way" with
>> fstab-entries, you know, "/dev/md0 /bak" etc.
>
>You just have to create LVM on top of the RAID:
>
>pvcreate /dev/md0
>vgcreate backups /dev/md0
>lvcreate -n backuppc -L 5T backups
>
>Now, you can format /dev/backups/backuppc as if it was a "standard"
>partition.

Nice, thanks!

>Ext4 is faster than ext3 with both small and big files, so I'd choose
>ext4.

OK, I'll look into it some more then - ext4 and zfs. I've already used ext4 on a test laptop and on the PVR I have at home (both via OpenSUSE). It seems to be doing well, but I've never tried ext4 in more "serious" scenarios.

--
/Sorin
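To round out the sequence Daniel gives above: once the logical volume exists, the format step is ordinary mkfs. A sketch that rehearses it on a scratch image file so the commands can be tried without real hardware - substitute /dev/backups/backuppc on the actual server; the image path and the `-m 0` choice are assumptions, not from the thread:

```shell
# Scratch file standing in for /dev/backups/backuppc, safe to rehearse on.
truncate -s 64M /tmp/backuppc-lv.img

# -m 0 drops the 5% root reserve -- reasonable on a dedicated backup volume.
# -F lets mkfs run against a plain file instead of a block device.
mkfs.ext4 -q -F -m 0 /tmp/backuppc-lv.img

# Confirm extents are enabled (the ext4 feature that helps with big files).
tune2fs -l /tmp/backuppc-lv.img | grep 'Filesystem features'
```

On the real box, the matching /etc/fstab line would be something along the lines of `/dev/backups/backuppc /var/lib/backuppc ext4 noatime 0 2` (mount point assumed; noatime spares BackupPC's hardlink-heavy pool a lot of metadata writes).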
From: Sorin S. <sor...@or...> - 2011-01-20 10:15:35
>-----Original Message-----
>From: Sorin Srbu [mailto:sor...@or...]
>Sent: Monday, January 17, 2011 10:34 AM
>To: 'General list for user discussion, questions and support'
>Subject: Re: [BackupPC-users] OT: New backup server with 6TB disk space
>
>>You just have to create LVM on top of the RAID:
>>
>>pvcreate /dev/md0
>>vgcreate backups /dev/md0
>>lvcreate -n backuppc -L 5T backups
>>
>>Now, you can format /dev/backups/backuppc as if it was a "standard"
>>partition.

Got another one for you guys: disk alignment.

There's much talk about "aligning" disks in order to maximize disk performance. Truth be told, I don't quite understand how to do it (calculation-wise), nor how to decide when and if I should do it, from the how-to articles Google finds for me.

Is there any filesystem, or applet, that can do this automatically for me?

--
/Sorin
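The alignment question boils down to simple arithmetic: start the first partition on a 1 MiB boundary, which is aligned both for 4K-sector drives and for common RAID stripe sizes. A sketch of the calculation - the 512-byte sector size is an assumption; the real value is in /sys/block/sdX/queue/hw_sector_size:

```shell
SECTOR=512                                   # logical sector size (assumed)
ALIGN=$(( 1024 * 1024 ))                     # 1 MiB alignment target, in bytes

# Round the alignment boundary up to a whole sector count.
START=$(( (ALIGN + SECTOR - 1) / SECTOR ))
echo "start the first partition at sector $START"   # -> 2048
```

In practice you rarely do this by hand: `parted -a optimal /dev/sdX mkpart primary 1MiB 100%` starts at that boundary for you, and `parted /dev/sdX align-check optimal 1` verifies an existing partition.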
From: Les M. <les...@gm...> - 2011-01-14 13:53:04
On 1/14/11 2:37 AM, Sorin Srbu wrote:
>
>> Is there a lot of block-level duplication among these? If so it might
>> be worth setting up something with zfs.
>
> How can I tell?

Ask someone who creates the large data sets if they copy any or all of it from something that already exists. Backuppc will not pool data if there is one byte of difference in a huge file, where block-level de-dup in the filesystem would use space only for unique blocks. If everyone is working with slightly different versions of the same data, you could easily save enough space to run a raid level that would protect against a disk failure.

>> You should be able to run the
>> free version of nexentastor on something that size - not sure how it
>> would work with backuppc.
>
> My next question was going to be about filesystems. 8-)
>
> Ext3 has so far with 1.5TB worked fine and had decent performance, if not a
> speed demon. Xfs isn't supported yet I think in CentOS. Zfs focuses mainly on
> really big files IIRC. Ext4 would maybe have been a good choice, but isn't
> supported in CentOS either if I'm not mistaken.
>
> I'd prefer sticking with CentOS and BPC, as both are relatively known now at
> the IT-department, ie me. 8-) Any particular reason I should look into
> Nexentastor? The Zfs?

Nexentastor gives you a web admin interface so you don't need to know much about the underlying OS (which is opensolaris) or how to configure zfs. If you have new hardware it's pretty easy to drop in and test. You could even run it as an nfs server with the backuppc system running on something else. Otherwise, if you can wait another month or so, I'd wait for CentOS 6 and use xfs. Actually you'll probably want the OS on its own separate small partition(s) anyway so it doesn't matter that much - but I think RHEL6 (and thus eventually CentOS6) is supposed to install on xfs as an option.

--
Les Mikesell
les...@gm...
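Les's "ask the people who make the data" suggestion is the practical answer to "how can I tell?". For a rougher, mechanical check, you can hash fixed-size blocks across the data sets and count repeats. A sketch - the function name, the 128 KiB block size (zfs's default recordsize), and the flat directory layout are all assumptions for illustration:

```shell
# Estimate block-level duplication: hash every 128 KiB block of every
# regular file directly under a directory, then compare total block
# count against the number of distinct hashes.
dedup_estimate() {
    dir=$1
    bs=131072                  # 128 KiB, matching the zfs default recordsize
    tmp=$(mktemp -d)
    n=0
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        split -b "$bs" "$f" "$tmp/blk.$n."   # one temp file per block
        n=$(( n + 1 ))
    done
    total=$(ls "$tmp" | wc -l)
    unique=$(md5sum "$tmp"/blk.* 2>/dev/null | awk '{print $1}' | sort -u | wc -l)
    echo "blocks=$total unique=$unique"
    rm -rf "$tmp"
}
```

Run as e.g. `dedup_estimate /data/models` (path hypothetical): if `unique` comes out much smaller than `blocks`, filesystem-level dedup would reclaim space that BackupPC's whole-file pooling cannot.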
From: Sorin S. <sor...@or...> - 2011-01-17 09:30:53
>-----Original Message-----
>From: Les Mikesell [mailto:les...@gm...]
>Sent: Friday, January 14, 2011 2:53 PM
>To: bac...@li...
>Subject: Re: [BackupPC-users] OT: New backup server with 6TB disk space
>
>>> Is there a lot of block-level duplication among these? If so it might
>>> be worth setting up something with zfs.
>>
>> How can I tell?
>
>Ask someone who creates the large data sets if they copy any or all of it from
>something that already exists. Backuppc will not pool data if there is one byte
>of difference in a huge file, where block level de-dup in the filesystem would
>use space for unique blocks. If everyone is working with slightly different
>versions of the same data, you could easily save enough space to run a raid
>level that would protect against a disk failure.

Ah, OK. Got it. Each user creates his or her own datasets, which differ. Only rarely do they start from the same starting points. Unfortunately... 8-/

>>> Any particular reason I should look into Nexentastor? The Zfs?
>
>Nexentastor gives you a web admin interface so you don't need to know much about
>the underlying OS (which is opensolaris) or how to configure zfs. If you have
>new hardware it's pretty easy to drop in and test. You could even run it as an
>nfs server with the backuppc system running on something else. Otherwise if you
>can wait another month or so, I'd wait for CentOS6 and use xfs. Actually you'll
>probably want the OS on its own separate small partition(s) anyway so it doesn't
>matter that much - but I think RHEL6 (and thus eventually CentOS6) is supposed
>to install on xfs as an option.

Isn't OpenSolaris dead now? I might try Nexentastor out anyway, but I'm planning to run CentOS 5.6 or 6 when either is released. I'm kinda hoping CentOS 6 is worthwhile, especially with the new filesystems available.

I tried the RHEL6 beta a while ago, and was less than impressed that the installer for some reason nuked the dual-boot Windows partitions without asking on the test box I had available at the time. 8-/

--
/Sorin
From: Frédéric M. <fre...@ju...> - 2011-01-14 16:54:46
On 14/01/2011 09:37, Sorin Srbu wrote:
>
> My next question was going to be about filesystems. 8-)
>
> Ext3 has so far with 1.5TB worked fine and had decent performance, if not a
> speed demon. Xfs isn't supported yet I think in CentOS. Zfs focuses mainly on
> really big files IIRC. Ext4 would maybe have been a good choice, but isn't
> supported in CentOS either if I'm not mistaken.
>
> I'd prefer sticking with CentOS and BPC, as both are relatively known now at
> the IT-department, ie me. 8-) Any particular reason I should look into
> Nexentastor? The Zfs?

Hi,

Has anyone tested or used Gluster with ext4 to store the backup files? Gluster creates a NAS from a server cluster, and the servers can use an ext4 filesystem.

Regards.

--
==============================================
| FRÉDÉRIC MASSOT |
| http://www.juliana-multimedia.com |
| mailto:fre...@ju... |
===========================Debian=GNU/Linux===