Thanks for the data point :-)
I'm much more concerned with why my journal is failing with rc=-231 than I am with why the full fsck takes a long time... I'm running on very inexpensive hardware and don't get anywhere near the disk performance Steve and Sandon get. I'm lucky to get 90 MB/sec sequential reads/writes. This is by design since our application is very cost sensitive and higher performance is not necessary. If you're curious about our hardware setup, check out the following blog post:
I don't have it any more as I ran into a problem with JFS at 32TiB where it would not check/mount/handle 32TiB+ volumes so had to switch over to XFS.
Did JFS completely fail at 32 TiB or were you seeing journal problems like I'm seeing? Have you been happy with XFS?
On Jan 6, 2010, at 1:35 PM, Steve Costaras wrote:
Don't think it would be volume size itself.
I have had a couple ~21TiB JFS volumes here with about 4.5 million files
and they checked under an hour. Now this was on a hardware array
(RAID-60, with 33 drives: 4 8-drive RAID-6 sets, LVM striped).
I don't have it any more as I ran into a problem with JFS at 32TiB
where it would not check/mount/handle 32TiB+ volumes so had to switch
over to XFS.
On 01/06/2010 00:50, Tim Nufire wrote:
Thanks for the response :-) I'm guessing the volume size is the problem here
because I think my journals stopped working when I switched from 1 to 1.5 TB
drives (12T -> 18T volumes).
The 8-hour fsck amazes me, though. How many inodes are you running on that thing?
Here are the stats for the volume where fsck took 8h 37m...
Filesystem Inodes IUsed IFree IUse%
/dev/md12 4,294,967,295 11,520,071 4,283,447,224 1%
My fsck takes only ~10 minutes with 6 million inodes (I think the inode count
affects it the most).
Either you're getting better disk performance than I am or some other factor affects
fsck run time... How many files do you have? The volume that took 8+ hours has:
19027657792 kilobytes total disk space.
5601979 kilobytes in 513655 directories.
14702061342 kilobytes in 11006413 user files.
0 kilobytes in extended attributes.
0 kilobytes in access control lists.
14181225 kilobytes reserved for system use.
4317017204 kilobytes are available for use.
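A quick back-of-the-envelope sketch (using the figures quoted above and my ~90 MB/sec sequential rate, so treat the numbers as rough): if fsck had to read the whole volume end to end, it would take well over 50 hours, far longer than the 8h 37m run we actually saw. That's consistent with fsck time being driven mostly by metadata (inode/directory count) rather than raw capacity.

```python
# Rough check: does the 8h 37m fsck time look like a full-disk scan
# or a metadata-bound pass? Figures are taken from the thread above.

TOTAL_KB = 19_027_657_792        # total disk space reported by fsck
SEQ_READ_KB_S = 90 * 1024        # ~90 MB/s sequential read rate

fsck_seconds = 8 * 3600 + 37 * 60            # observed fsck run: 8h 37m
full_scan_seconds = TOTAL_KB / SEQ_READ_KB_S  # time to read every block

print(f"observed fsck time:  {fsck_seconds / 3600:.1f} h")   # ~8.6 h
print(f"full-disk scan time: {full_scan_seconds / 3600:.1f} h")  # ~57.4 h
```

The ~7x gap suggests fsck is walking metadata, not data blocks, which fits the observation that inode count dominates run time.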
PS: multi-petabyte data farm? JFS? software raid? backblaze?
Yep, that's us :-)
Jfs-discussion mailing list