From: Frédéric M. <fre...@ju...> - 2012-10-04 13:07:11
Hi,

Last night's backup did not run; the log contains an error message about
hard links. I rebooted the server to run fsck. It found no errors, but
BackupPC did not start. In the log there is:

  Can't create a test hardlink between a file in /var/lib/backuppc/pc
  and /var/lib/backuppc/cpool. Either these are different file systems,
  or this file system doesn't support hardlinks, or these directories
  don't exist, or there is a permissions problem.

The BackupPC directory "/var/lib/backuppc" is on a logical volume,
"/dev/mapper/vg01-lv_backup". There are 4 SATA drives in RAID 10 with MD,
and logical volumes for the system, swap and the BackupPC directory. The
partitions are ext4.

The df command indicates that all inodes are used:

# df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg01-lv_backup  918G  627G  246G  72% /var/lib/backuppc

# df -i
Filesystem                   Inodes    IUsed IFree IUse% Mounted on
/dev/mapper/vg01-lv_backup 60186624 60186624     0  100% /var/lib/backuppc

The inode count of an ext4 filesystem is static.

- How can I increase the number of inodes?
- Can I replace the 500GB drives one by one with 1TB drives, then grow
  the logical volume and the filesystem for BackupPC?
- Can the inode_ratio be changed while resizing an ext4 filesystem?

Regards.
-- 
==============================================
|  FRÉDÉRIC MASSOT                           |
| http://www.juliana-multimedia.com          |
|   mailto:fre...@ju...                      |
| +33.(0)2.97.54.77.94 +33.(0)6.67.19.95.69  |
===========================Debian=GNU/Linux===
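For reference, the fixed inode table can be confirmed straight from the
superblock. A minimal sketch, assuming the device path above:

# tune2fs -l /dev/mapper/vg01-lv_backup | grep -i inode

The "Inode count" line is set at mkfs time and never changes on ext4;
"Free inodes" should match what df -i reports.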
From: Tyler J. W. <ty...@to...> - 2012-10-05 08:58:40
On 2012-10-04 19:56, Michael Stowe wrote:
>> The inode count of an ext4 filesystem is static.
>>
>> - How can I increase the number of inodes?
>
> The number of ext4 inodes is set when the ext4 volume is created, so you
> have to recreate the file system. Perhaps using an alternative to ext4.

I wonder what caused this. My BackupPC filesystem was created with
default mkfs.ext4, and has used far more disk space than inodes:

Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        3.6T  1.6T  2.1T  43% /var

Filesystem        Inodes   IUsed     IFree IUse% Mounted on
/dev/md0       244195328 4966307 239229021    3% /var

Regards,
Tyler

-- 
"An Englishman, even if he is alone, forms an orderly queue of one."
   -- George Mikes
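One way to see where the inodes go is to count pooled files and per-host
directory entries. A rough sketch, assuming the usual /var/lib/backuppc
layout (adjust the paths to your pool):

# unique pooled files (one inode each):
find /var/lib/backuppc/cpool -type f | wc -l

# directory entries per host tree (hardlinks counted once per link),
# to spot which host contributes the flood of small files:
for d in /var/lib/backuppc/pc/*; do
    printf '%10d %s\n' "$(find "$d" | wc -l)" "$d"
done | sort -rn | head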
From: Les M. <les...@gm...> - 2012-10-05 12:42:49
On Fri, Oct 5, 2012 at 3:58 AM, Tyler J. Wagner <ty...@to...> wrote:
> On 2012-10-04 19:56, Michael Stowe wrote:
>>
>>> The inode count of an ext4 filesystem is static.
>>>
>>> - How can I increase the number of inodes?
>>
>> The number of ext4 inodes is set when the ext4 volume is created, so you
>> have to recreate the file system. Perhaps using an alternative to ext4.
>
> I wonder what caused this. My BackupPC filesystem was created with
> default mkfs.ext4, and has used far more disk space than inodes:
>
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/md0        3.6T  1.6T  2.1T  43% /var
>
> Filesystem        Inodes   IUsed     IFree IUse% Mounted on
> /dev/md0       244195328 4966307 239229021    3% /var

There must be a lot of tiny files that are not duplicates being backed
up. If the source can be identified, maybe they could be tarred or
zipped in a pre-user command instead of backing up those directories
normally.

If it were mine, I'd probably pull the 'mirror' set of drives out of the
RAID10 and add a new RAID1 of a pair of 2 or 3TB drives formatted as
XFS, and start over, leaving the old set so you could switch back if you
had to do a restore before you built sufficient history in the new
archive.

-- 
Les Mikesell
les...@gm...
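The tar-before-backup idea can hang off BackupPC's per-host config. A
sketch only: the directory and tarball paths below are invented, while
$Conf{DumpPreUserCmd}, $Conf{BackupFilesExclude}, $sshPath and $host are
standard BackupPC config items:

# hypothetical pc/<host>/config.pl fragment; $sshPath and $host are
# substituted by BackupPC at dump time, the paths are made-up examples
$Conf{DumpPreUserCmd} = '$sshPath -q -x root@$host'
    . ' tar -czf /var/tmp/tinyfiles.tar.gz /srv/tiny-files';
# exclude the raw tree so only the single tarball gets pooled
$Conf{BackupFilesExclude} = ['/srv/tiny-files'];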
From: Ray F. <ray...@av...> - 2012-10-05 15:25:15
On Fri, Oct 5, 2012 at 6:42 AM, Les Mikesell <les...@gm...> wrote:
>> I wonder what caused this. My BackupPC filesystem was created with
>> default mkfs.ext4, and has used far more disk space than inodes:
>>
>> Filesystem      Size  Used Avail Use% Mounted on
>> /dev/md0        3.6T  1.6T  2.1T  43% /var
>>
>> Filesystem        Inodes   IUsed     IFree IUse% Mounted on
>> /dev/md0       244195328 4966307 239229021    3% /var

Les-

We're using standard ext4 settings as well, and are doing just fine in
the inode dept. Our disk is smaller, so we've got just over 100 million
inodes instead of your 244M. I found it interesting that your system and
my system are using about the same number of inodes, yet you're using 2x
the space.

# df -i /dev/backuppc
Filesystem        Inodes   IUsed    IFree IUse% Mounted on
/dev/backuppc  100663296 4212939 96450357    5% /var/lib/BackupPC

# df -h /dev/backuppc
Filesystem     Size  Used Avail Use% Mounted on
/dev/backuppc  1.5T  764G  672G  54% /var/lib/BackupPC

That's only 5514 inodes/GByte used.

From Frédéric Massot's original message:

> The df command indicates that all inodes are used:
>
> # df -h
> Filesystem                  Size  Used Avail Use% Mounted on
> /dev/mapper/vg01-lv_backup  918G  627G  246G  72% /var/lib/backuppc
>
> # df -i
> Filesystem                   Inodes    IUsed IFree IUse% Mounted on
> /dev/mapper/vg01-lv_backup 60186624 60186624     0  100% /var/lib/backuppc

That's 65562 inodes/GB used. His original file system was more closely
sized to his demand for space, but was still close to 1TB, so he got
only 60M inodes. Compared to our environments, as you pointed out, he's
got a huge number of unique files that must be quite small.

Out of curiosity, I checked some of our "primary storage", where we have
a mix of lots (over 1 billion) of really small files and some large
databases, and found we're using about 11117 inodes/GB.

Frédéric's environment appears to have an unusual file density, so it
may be that he could not have anticipated that his system would be
overrun by all those files.

-- 
Ray Frush               "Either you are part of the solution
T:970.288.6223           or part of the precipitate."
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
Avago Technologies, Inc. | Technical Computing | IT Engineer
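The inodes-per-GB figures reduce to two df columns. A throwaway sketch
(the mount point is just the example above, and the parsing breaks if df
wraps long device names onto a second line):

mnt=/var/lib/BackupPC
iused=$(df -i "$mnt" | awk 'NR==2 {print $3}')                  # inodes used
gused=$(df -BG "$mnt" | awk 'NR==2 {sub("G","",$3); print $3}') # GB used
echo "$((iused / gused)) inodes/GB used"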
From: Ray F. <ray...@av...> - 2012-10-05 15:38:39
I can't math today, I have the dumb...

On Fri, Oct 5, 2012 at 9:24 AM, Ray Frush <ray...@av...> wrote:
> Out of curiosity, I checked some of our "primary storage", where we have
> a mix of lots (over 1 billion) of really small files and some large
> databases, and found we're using about 11117 inodes/GB.

Our primary storage example has a total of over 1B inodes, which is the
number I used in the calculation above. We're only using 17% of them, or
1918 inodes/GB. I read off the wrong column.

-- 
Ray Frush               "Either you are part of the solution
T:970.288.6223           or part of the precipitate."
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
Avago Technologies, Inc. | Technical Computing | IT Engineer
From: Frédéric M. <fre...@ju...> - 2012-10-07 18:16:18
On 04/10/2012 20:56, Michael Stowe wrote:
>> The inode count of an ext4 filesystem is static.
>>
>> - How can I increase the number of inodes?
>
> The number of ext4 inodes is set when the ext4 volume is created, so you
> have to recreate the file system. Perhaps using an alternative to ext4.

I thought resize2fs also increased the number of inodes. Damn!

The file system was created with the "resize_inode" option; can that
help to increase the size or the number of inodes?

>> - Can I replace the 500GB drives one by one with 1TB drives, then grow
>>   the logical volume and the filesystem for BackupPC?
>
> That won't help. This is from the mkfs.ext4 man page:
>
> "Be warned that it is not possible to expand the number of inodes on a
> filesystem after it is created, so be careful deciding the correct value
> for this parameter."

So I must create a second logical volume, format it with a smaller
inode_ratio such as 4096, and copy the BackupPC data over. Afterwards, I
could reformat the old logical volume for re-use.

So I'll replace the disks one by one with 1TB disks.

For RAID10 with MD I have two choices: I can grow the partition on each
disk, and therefore the array, or create a second partition on each disk
and a second RAID 10 array.

Can I increase the size of a RAID 10 array? The man page only talks
about RAID 1/4/5/6.

BackupPC uses hard links, which are restricted to a single file system.
I do not know which directories the hard links span.

Can I mount the "cpool" directory on one logical volume and the "pc"
directory on another?

Thank you for your answers.
-- 
==============================================
|  FRÉDÉRIC MASSOT                           |
| http://www.juliana-multimedia.com          |
|   mailto:fre...@ju...                      |
| +33.(0)2.97.54.77.94 +33.(0)6.67.19.95.69  |
===========================Debian=GNU/Linux===
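A sketch of the one-disk-at-a-time replacement, with placeholder device
names. Whether the final --grow step works on RAID10 depends on the
kernel and mdadm versions, so treat that as an assumption to verify:

mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1   # repeat per disk
# ...physically swap in the 1TB drive, create a larger partition, then:
mdadm /dev/md0 --add /dev/sdd1
cat /proc/mdstat                     # wait for the resync to finish
# once all four members sit on bigger partitions:
mdadm --grow /dev/md0 --size=max     # RAID10 support varies by version
pvresize /dev/md0                    # hand the new space up to LVM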
From: Frédéric M. <fre...@ju...> - 2012-10-08 09:08:33
On 07/10/2012 23:56, Michael Stowe wrote:
>> The file system was created with the "resize_inode" option; can that
>> help to increase the size or the number of inodes?
>
> resize_inode is a flag you can set when you first create the filesystem,
> that makes it easier to expand the file system later. Again, "when you
> first create..." so ... no.

The file system, now with zero inodes free, was created with the
"resize_inode" flag. It makes it easier to increase the size of the file
system, but it does not increase the number of inodes, is that correct?

>> So I must create a second logical volume, format it with a smaller
>> inode_ratio such as 4096, and copy the BackupPC data over. Afterwards, I
>> could reformat the old logical volume for re-use.
>
> I'm not following you, but to be clear: you're not getting any more
> inodes in that filesystem. You need a new filesystem.

OK, to create a new file system I will replace the 4 hard drives one by
one.

>> So I'll replace the disks one by one with 1TB disks.
>>
>> For RAID10 with MD I have two choices: I can grow the partition on each
>> disk, and therefore the array, or create a second partition on each disk
>> and a second RAID 10 array.
>>
>> Can I increase the size of a RAID 10 array? The man page only talks
>> about RAID 1/4/5/6.
>
> There's a good reason for that. It's RAID0+1, so it behaves like 0 on top
> of 1 (or 1E, in some cases.) At any rate, you should be able to expand it
> using pairs of identical drives, as I recall. YMMV.

In the docs, growing an array distinguishes between adding a new
partition (limited to RAID 1/4/5/6) and expanding the existing
partitions, which seems to work for all RAID levels:

https://raid.wiki.kernel.org/index.php/Growing#Expanding_existing_partitions

From what I understand, replacing the drives one by one should work. :o)

> I'm not even going to ask why you're going with RAID10 and ext4, but
> neither of these would be high on my list.

I always prefer RAID 10 to RAID 5 for safety reasons. For the first
BackupPC server I installed, I used RAID 5 to get more space than a
RAID 10 would give. I was soon disappointed by the performance, so since
then I only use RAID 10.

For the file system, I use the most standard choices on Linux, but after
this inode problem I should perhaps reconsider and look at other file
systems.

>> BackupPC uses hard links, which are restricted to a single file system.
>> I do not know which directories the hard links span.
>>
>> Can I mount the "cpool" directory on one logical volume and the "pc"
>> directory on another?
>
> No.

After moving the BackupPC data to the new logical volume, and thus the
new file system, the old logical volume will no longer be used. I could
delete it, but how could I use the freed space?

With XFS, does the inode limit grow as the file system grows? XFS may be
a better solution for me, and a way to use the free space.

Do people use XFS on Debian without problems?

Regards.
-- 
==============================================
|  FRÉDÉRIC MASSOT                           |
| http://www.juliana-multimedia.com          |
|   mailto:fre...@ju...                      |
| +33.(0)2.97.54.77.94 +33.(0)6.67.19.95.69  |
===========================Debian=GNU/Linux===
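For the new filesystem, the two options discussed come down to one mkfs
flag. The volume name is a placeholder, and the 4096 is the inode_ratio
mentioned above (ext4's default is one inode per 16384 bytes):

mkfs.ext4 -i 4096 /dev/vg01/lv_backup_new   # one inode per 4 KiB
# or sidestep static inode tables entirely:
mkfs.xfs /dev/vg01/lv_backup_new            # inodes allocated on demand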
From: Carl W. S. <ch...@re...> - 2012-10-08 11:47:52
On 10/08 11:07 , Frédéric Massot wrote:
> After moving the BackupPC data to the new logical volume, and thus the
> new file system, the old logical volume will no longer be used. I could
> delete it, but how could I use the freed space?

Expand your new volume and filesystem to use it. Are you using LVM, or
just plain partitions?

> With XFS, does the inode limit grow as the file system grows?

XFS doesn't really have a problem with inodes.

> Do people use XFS on Debian without problems?

I've used it for some years. If you do use XFS, make sure you have enough
space in RAM+swap to accommodate the xfs_check tool, which is notoriously
memory-hungry. My suggested filesystem layout is something like:

30GB /, using ext4
10GB swap
remainder of space in a separate partition mounted on /var/lib/backuppc

I've found I needed a good 5GB or more of swap to accommodate fscking a
9TB filesystem; but that's only the roughest metric, so I'll suggest 10GB
of swap. It seems like a horrible waste of space since most of it will
never be used; but compared to a multi-TB filesystem it's vanishingly
small, and it simplifies the process when you fsck your XFS filesystem
(which shouldn't be necessary; but hardware does fail, and when you have
hardware errors, sometimes you need to fsck).

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com
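On the memory point: a read-only consistency pass with xfs_repair is
generally lighter than xfs_check. A sketch, with a placeholder device
name:

umount /var/lib/backuppc             # xfs_repair needs the fs unmounted
xfs_repair -n /dev/vg01/lv_backup    # -n: check only, modify nothing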
From: Frédéric M. <fre...@ju...> - 2012-10-08 13:02:07
On 08/10/2012 13:47, Carl Wilhelm Soderstrom wrote:
> On 10/08 11:07 , Frédéric Massot wrote:
>> After moving the BackupPC data to the new logical volume, and thus the
>> new file system, the old logical volume will no longer be used. I could
>> delete it, but how could I use the freed space?
>
> Expand your new volume and filesystem to use it. Are you using LVM, or
> just plain partitions?

Yes, I use LVM on MD.

I thought I would just grow the new file system, but my concern is
running into the same lack of inodes again in a few years. From what
I've read, if I choose XFS instead of ext4, I will not have this inode
problem.

>> With XFS, does the inode limit grow as the file system grows?
>
> XFS doesn't really have a problem with inodes.
>
>> Do people use XFS on Debian without problems?
>
> I've used it for some years. If you do use XFS, make sure you have enough
> space in RAM+swap to accommodate the xfs_check tool, which is notoriously
> memory-hungry. My suggested filesystem layout is something like:
>
> 30GB /, using ext4

10GB for /, using ext4, 26% used.

4GB of RAM; I'll add more RAM for XFS.

> 10GB swap

4GB of swap on a logical volume; with the new disks I will increase the
size of the swap.

> remainder of space in a separate partition mounted on /var/lib/backuppc

The BackupPC data is already on its own logical volume. :o)

> I've found I needed a good 5GB or more of swap to accommodate fscking a
> 9TB filesystem; but that's only the roughest metric, so I'll suggest 10GB
> of swap. It seems like a horrible waste of space since most of it will
> never be used; but compared to a multi-TB filesystem it's vanishingly
> small, and it simplifies the process when you fsck your XFS filesystem
> (which shouldn't be necessary; but hardware does fail, and when you have
> hardware errors, sometimes you need to fsck).

Thank you for the rule of thumb.
-- 
==============================================
|  FRÉDÉRIC MASSOT                           |
| http://www.juliana-multimedia.com          |
|   mailto:fre...@ju...                      |
| +33.(0)2.97.54.77.94 +33.(0)6.67.19.95.69  |
===========================Debian=GNU/Linux===
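Reclaiming the old volume's space is then the standard LVM steps. A
sketch with placeholder volume names, assuming the new volume is XFS
(which grows while mounted):

lvremove /dev/vg01/lv_backup_old              # release the old extents
lvextend -l +100%FREE /dev/vg01/lv_backup_new
xfs_growfs /var/lib/backuppc                  # grow XFS to fill the volume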
From: Carl W. S. <ch...@re...> - 2012-10-08 15:06:40
On 10/08 03:01 , Frédéric Massot wrote:
> I thought I would just grow the new file system, but my concern is
> running into the same lack of inodes again in a few years.
>
> From what I've read, if I choose XFS instead of ext4, I will not have
> this inode problem.

Correct.

> 10GB for /, using ext4, 26% used.
>
> 4GB of RAM; I'll add more RAM for XFS.
>
>> 10GB swap
>
> 4GB of swap on a logical volume; with the new disks I will increase the
> size of the swap.
>
>> remainder of space in a separate partition mounted on /var/lib/backuppc
>
> The BackupPC data is already on its own logical volume. :o)

Sounds good.

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com