jfs-discussion Mailing List for Journaled File System (Page 184)
Brought to you by: blaschke-oss, shaggyk
From: <pg...@jf...> - 2005-11-16 22:08:45
|
>>> On Tue, 15 Nov 2005 17:40:30 -0600, Dave Kleikamp >>> <sh...@au...> said: [ ... JFS papers in PDF missing ... ] shaggy> I found the pdf files on www6.software.ibm.com, so I shaggy> have changed the links to point to them rather than shaggy> the html. You're right. They are much more readable, shaggy> even if the links in the html still worked. :-) >> [ ... ] written so far a summary of the JFS layout paper, >> here: http://WWW.sabi.co.UK/Notes/linuxFS.html#jfsStruct [ >> ... ] shaggy> I haven't reviewed this yet, but I'll try to get to it shaggy> before I forget. Don't worry, there is no hurry. My general attitude to JFS is that it is mature and good, so there is little urgency, just second order polishing. shaggy> Do you mind if I link to this from the documents section shaggy> of the web page? Fine. |
From: Subhathra S. <sub...@in...> - 2005-11-16 10:28:17
|
I will be out of the office starting 11/10/2005 and will not return until 11/17/2005. Please contact my backup K N Rajesh/India/IBM for any urgent issues. |
From: Dave K. <sh...@au...> - 2005-11-15 23:40:50
|
On Thu, 2005-11-03 at 02:15 +0000, Peter Grandi wrote: > Hi I have recently noticed that the DeveloperWorks site has got > reorganized over the years and the JFS papers as a result have > gotten a bit broken, as in there are now only the HTML versions, > and the links to the illustrations seem wrong. > > So I have done a bit of searching hoping to find surviving > copies on the net, and I found much more readable copies of the > PDF versions, with the images inline, inside the SUSE 'jfsutils' > RPM, for example inside this RPM: > > ftp://FTP.SUSE.com/pub/suse/i386/9.3/suse/src/jfsutils-1.1.7-5.src.rpm > > Perhaps those PDFs should be extracted from that RPM and put > somewhere on the JFS page on SourceForge, as they still seem > largely current (and much more readable than the broken HTML). Okay. I'm sorry it's taken me so long to act on this. I found the pdf files on www6.software.ibm.com, so I have changed the links to point to them rather than the html. You're right. They are much more readable, even if the links in the html still worked. :-) > I have also started moving my notes on Linux filesystems and JFS > in particular, and I have written so far a summary of the JFS > layout paper, here: > > http://WWW.sabi.co.UK/Notes/linuxFS.html#jfsStruct > > It may not be a faithful summary as some bits of the JFS layout > papers seemed somewhat ambiguous to me, and I have a slightly > different way of looking at things (and it is not that proofread > yet, especially the intralinks). I haven't reviewed this yet, but I'll try to get to it before I forget. Do you mind if I link to this from the documents section of the web page? -- David Kleikamp IBM Linux Technology Center |
From: Dave K. <sh...@au...> - 2005-11-15 20:11:51
|
On Tue, 2005-11-15 at 14:33 -0500, Guerra, Jim wrote: > after running a system for some time it crashed and rebooted itself. > we are running JFS on the root partition and the fsck showed that the > filesystem is OK. > there is however a file in a directory that we have no permission to. > ls -al returns the filename : permission denied. I assume you're getting this error as root. > I am able to mv the parent directory to another name but not to a > different filesystem. > What can we do to remove the file ? and any explanation of the > cause would be helpful Does the filename look correct? If not, the directory entry may have a character that has a non-zero high-order byte (file names are stored in 16-bit unicode). In that case, mounting with -oiocharset=utf8 may let you access the file correctly, and then either rename or delete it. Otherwise, is there anything in the system log (dmesg)? What kernel are you running? > Jim Guerra > Systems Engineer > NewsBank, INC > > jg...@ne... > -- David Kleikamp IBM Linux Technology Center |
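The workaround Dave describes above, removing a file whose name carries unexpected high-order bytes, can be sketched like this. The directory path, filename, and mount point below are all illustrative, not taken from Jim's system; the setup simulates the problem entry on an ordinary filesystem.

```shell
set -e
# Hypothetical setup: simulate a directory entry whose name carries a
# non-ASCII byte (the real case above would be on JFS; this runs anywhere).
d=$(mktemp -d)
: > "$d/$(printf 'report\303\251.txt')"

# Make the odd bytes visible: ls -b escapes non-printable characters.
ls -b "$d"

# Delete by inode number, so the untypable name never has to be spelled out.
ino=$(ls -i "$d" | awk '{print $1; exit}')
find "$d" -maxdepth 1 -inum "$ino" -delete

# On JFS specifically, a remount with a matching charset may make the name
# readable directly (needs root; shown for reference only):
#   mount -o remount,iocharset=utf8 /mountpoint
```

The delete-by-inode trick is useful whenever a name cannot be typed or matched reliably from the shell.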
From: Guerra, J. <jg...@ne...> - 2005-11-15 19:33:32
|
after running a system for some time it crashed and rebooted itself. we are running JFS on the root partition and the fsck showed that the filesystem is OK. there is however a file in a directory that we have no permission to. ls -al returns the filename : permission denied. I am able to mv the parent directory to another name but not to a different filesystem. What can we do to remove the file ? and any explanation of the cause would be helpful Jim Guerra Systems Engineer NewsBank, INC jg...@ne... |
From: Can S. <cs...@st...> - 2005-11-03 13:14:51
|
Hi, My research group is working on a tool that automatically finds bugs in FS mounting code by mounting a "symbolic" disk. Our approach carefully (automatically) looks at the checks the FS does on the disk and sees if certain checks that prevent potential errors are missing. We have found two Null pointer dereference bugs caused by the following disks. Both disks are only 64k long but the crashes happen nonetheless. I have only given the actual bugs a cursory look but they seem pretty complicated. Bug 1: jfs_metapage.c: http://keeda.stanford.edu/jfs_mount.bug.disk Bug 2: jfs_mount.c: http://keeda.stanford.edu/jfs_metapage.bug.disk I hope these bug reports help. We are still in the early stages of using our tool and hope to provide you with more bug reports in the future. Regards, Can Sar |
From: <pg...@jf...> - 2005-11-03 02:15:40
|
Hi I have recently noticed that the DeveloperWorks site has got reorganized over the years and the JFS papers as a result have gotten a bit broken, as in there are now only the HTML versions, and the links to the illustrations seem wrong. So I have done a bit of searching hoping to find surviving copies on the net, and I found much more readable copies of the PDF versions, with the images inline, inside the SUSE 'jfsutils' RPM, for example inside this RPM: ftp://FTP.SUSE.com/pub/suse/i386/9.3/suse/src/jfsutils-1.1.7-5.src.rpm Perhaps those PDFs should be extracted from that RPM and put somewhere on the JFS page on SourceForge, as they still seem largely current (and much more readable than the broken HTML). I have also started moving my notes on Linux filesystems and JFS in particular, and I have written so far a summary of the JFS layout paper, here: http://WWW.sabi.co.UK/Notes/linuxFS.html#jfsStruct It may not be a faithful summary as some bits of the JFS layout papers seemed somewhat ambiguous to me, and I have a slightly different way of looking at things (and it is not that proofread yet, especially the intralinks). |
From: <pg...@jf...> - 2005-11-02 17:10:05
|
>>> On Wed, 2 Nov 2005 13:33:02 -0000, "Max Eaves" >>> <max...@re...> said: max.eaves> [ ... ] Dell PoweRaid RAID array, operating on RAID max.eaves> 5 across 11 discs. http://WWW.BAARF.com/ max.eaves> The problem is that jfs drives works correct for a max.eaves> week or so, and then suddenly we start getting I/O max.eaves> errors on both arrays. Which ones? max.eaves> When we try and remount the drives (firstly max.eaves> unmounting them and remounting using mount -o max.eaves> remount) or restarting the servers, we are getting max.eaves> I/O Superblock errors; [ ... ] Which ones? max.eaves> [ ... ] Dell's diagnostic discs. No hardware fault max.eaves> has been found, [ ... ] Perhaps with the electronics in the RAID host adapter... Using my dowsing rod :-), it seems most likely that there is some (multiple, likely) hardware problem with some of the discs, especially given that there are 11 of them. I would check whether the arrays are running in damaged mode. Otherwise, SLES is a well supported product, so it is likely that the SUSE issue database would have some entry referring to sw issues with JFS on 64 bit in 2.6.5 so perhaps a search may be in order. |
From: <pg...@jf...> - 2005-11-02 17:09:56
|
>>> On Tue, 1 Nov 2005 08:22:52 -0500, JB...@ha... said: [ ... ] JBorn> /dev/sda1 type ext3 JBorn> /dev/sda2 type ext3 JBorn> /dev/sda3 type swap JBorn> /dev/sda4 type extended JBorn> /dev/sda5 free That looks good. JBorn> In the background I keep getting the following messages JBorn> while running qtparted: JBorn> Error: Filesystem was not cleanly unmounted! You should JBorn> e2fsck. Modifying an unclean filesystem could cause JBorn> severe corruption. Error: Could not detect file system. Both are fine -- unchecked filesystems will be mounted ro, and probably some of those partitions ('sda5') don't have a filesystem in them. But 'qtparted' is complaining here as to «not cleanly unmounted» simply because '/dev/sda[12]' are most likely _mounted_ as you are running it on an active system. This is usually safe as long as you don't mess around with those mounted partitions. JBorn> Should I be running some sort of file system check JBorn> weekly? Well, many file systems have a mount count/interval, and will run 'fsck' every now and then automatically. JBorn> I believe I want to create a folder /video or /var/video JBorn> before I use mkfs.jfs. What is the proper location for a JBorn> video directory? That all depends on which naming conventions you like. A file tree is a classification mechanism for files, and how people like to classify files is mostly a personal preference. I personally give proper names to all my filesystems and mount them all under '/v' (my own convention), so my file trees are called something like "/v/sugar" or "/v/salt" (or whatever other naming scheme one likes); the FSSTND usually recommends '/media' as the mount point directory, and many distributions then use the partition name as the mountpoint name, resulting in '/media/sda5'. Then it is easy to use those mount point names directly or to ''graft'' them onto the filesystem tree at any point using symbolic links or 'mount --bind'. 
The reason I prefer proper names for filesystems is that in that way I can move them around if needed without changing mount point name; it would be terribly confusing I think if one moved the contents of '/dev/sda5' to say '/dev/hdd6' and then mounted '/dev/hdd6' as '/media/sda5' for backwards compatibility. Anyhow often one would not want to use mountpoint paths directly, but use symbolic links or 'mount --bind'. JBorn> and do I create this before running mkfs.jfs? Just create a directory. JBorn> Now I want /dev/sda5 to be formatted with jfs what is the JBorn> mkfs.jfs to run? Well 'jfs_mkfs /dev/sda5', or 'jfs_mkfs -L NAME /dev/sda5' if you want to give the filesystem a name. JBorn> Where does mount/umount come into the picture? As to this, it would be somewhat useful to read a discussion of how UNIX/POSIX/Linux systems handle partitions, filesystems and mounting. There are very many introductory tutorials and HOWTOs on disc handling under UNIX/POSIX/Linux, often as chapters in system administration books/tutorials. This one seems rather appropriate: http://WWW.TLDP.org/LDP/Linux-Filesystem-Hierarchy/html/Linux-Filesystem-Hierarchy.html JBorn> Since this is a video partition I want to make sure I can JBorn> add more drives later and just add space to this logical JBorn> volume? It is too late (except for some hard and hazardous work) to add a volume manager underneath. This is not too bad as I think that except in very very few cases logical volume managers are largely pointless, because under UNIX like systems one deals with file trees, and volumes are just slightly annoying containers for those. The UNIX like naming trees are/can be almost totally independent of how the tree is partitioned into volumes, thanks to things like symbolic links or the Linux 'mount --bind' extension. Except in special cases (like the root filesystem, or sometimes '/var/spool', or FAT32 partitions) one might as well create a single large partition per disk. 
People with a MS Windows mindset often still think in terms of drive letters and therefore volumes, even if ''mounting'' in a way similar to UNIX like systems has been available for a long time even under MS Windows. JBorn> is the /dev/sda4 of type extended a logical volume? No, extended partitions are just a hack to add more partitions than the original partition table design allowed. |
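Peter's advice in this exchange (a mount point is just a directory; make the filesystem with jfs_mkfs; graft the tree elsewhere with 'mount --bind' or symlinks) can be sketched as below. The device name /dev/sda5 and the label "video" come from the thread; the root-only steps are left as comments for reference.

```shell
set -e
# A mount point is only a directory; create it first.
mkdir -p /tmp/demo/media/video
# Root-only steps from the thread (shown for reference, not executed here):
#   jfs_mkfs -L video /dev/sda5            # make the filesystem, with a label
#   mount -t jfs /dev/sda5 /media/video    # activate ("mount") it
#   mount --bind /media/video /var/video   # graft the tree at a second path

# The same "grafting" can be done without root via a symbolic link:
ln -sfn /tmp/demo/media/video /tmp/demo/video
[ "$(readlink /tmp/demo/video)" = /tmp/demo/media/video ] && echo grafted
```

Labeling the filesystem (rather than relying on the device name) is what lets it move to a different device later without renaming mount points, as Peter argues above.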
From: Dave K. <sh...@au...> - 2005-11-02 14:32:47
|
On Tue, 2005-11-01 at 18:49 +0000, Peter Grandi wrote: > Hi, I have recently (six weeks ago) switched all the filesystems > on my PC from 'ext3' to JFS for various reasons, in part because > I like the JFS design, in part because I had discovered that > over the past several months my 'ext3' ''root'' filesystem > performance had degraded by 7 times over that of a freshly > loaded copy of itself. > > So I have decided to compare the read speed of my ''root'' JFS > filesystem as it is now, after six weeks of in-place package > upgrades, and how it would be if I reloaded it. Great. I have never had this kind of data. > I have an otherwise quiescent disc, and I copied my ''root'' > filesystem to it first as a partition image, to preserve the > ''used'' layout, and then by 'tar', so that it would be reloaded > in an optimal ''new'' layout. The filesystem contains around > 7.5GiB of data in 360k files, and 2.3GiB are free. > > The result is that a whole-filesystem 'tar c' on the ''new'' > layout takes 10min., on the ''used'' layout takes 26min., which > is a factor of over 2.5 times longer. I have also done some spot > checks on some largish files that I know have been regularly > updated/rewritten, and there are similar slowdowns (no slowdown > on files that have not been updated in the past six weeks). > > Details here: http://WWW.sabi.co.UK/Notes/anno05-4th.html#051101 This makes a great case for implementing a defragmenter. I'd put it off in the past for a few reasons. 1) there were always more urgent things that needed to be done, 2) the defrag tool that ran on OS/2 was very limited in what it did, and I didn't think that porting it alone would be sufficient, 3) I didn't have any data to demonstrate that fragmentation was a problem. (I noticed the comment about sync adding 4 minutes due to the modified atimes. Have you considered mounting with the noatime option?) 
> Now 2.5 times is a lot better than 7 times (but the latter was > over a rather longer period on a filesystem with less free space), > but it is still somewhat disappointing, as the average transfer > rate for reading the whole filesystem goes down (on a disc > capable of around 35MiB/s sustained in optimal single-large-file > conditions) from 12MiB/s, which is reasonable, to 5MiB/s, which > is not awesome. Agreed. I still don't anticipate having the time to work on a defragmenter in the near future. This would be a good project for someone wanting to contribute something to jfs. I'd be happy to help. -- David Kleikamp IBM Linux Technology Center |
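The whole-tree 'tar c' read measurement Peter used for the 10min-vs-26min comparison can be reproduced with a sketch like this. The target path is illustrative (his test was over his root filesystem), and the noatime remount from Dave's aside is shown as a root-only comment.

```shell
set -e
# Stream a whole tree through tar and time the read pass; writing to
# /dev/null keeps the disc's read side as the only bottleneck.
target=/usr/share             # illustrative path, not Peter's setup
start=$(date +%s)
tar cf /dev/null "$target" 2>/dev/null || true   # ignore unreadable files
end=$(date +%s)
echo "read pass took $((end - start))s"

# atime updates add write traffic to a read benchmark (the extra 'sync'
# minutes noted above); they can be suppressed by remounting (root only):
#   mount -o remount,noatime /
```

Running the same pass on a freshly reloaded copy of the filesystem, as in the quoted test, isolates the cost of in-place aging from everything else.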
From: Max E. <max...@re...> - 2005-11-02 13:33:15
|
Hi there, Hopefully you will be able to help with a problem (or at least point me at fixing) that I am suffering with my jfs format drives. We are running two jfs drives on Dell Poweredge 2850 machines. The operating system is SuSE Linux Enterprise 9.1 x_64 using a 2.6.5 kernel. Our jfs drives are a couple of Dell PoweRaid RAID arrays, operating on RAID 5 across 11 discs. The RAID is setup in hardware, not software, so in effect the server sees two logical drives (which matches each of the two RAID arrays). The RAID is approximately 980G (just short of 1TB), which I assume that jfs should have no problems in addressing. The SCSI card is a Dell PERC 4s/i card with two ports, going to each of the RAID arrays. Fdisk doesn't seem to have a partition type for jfs, so 0x83 (Linux) has been used. According to the fdisk manual, 0x35 (unknown) is jfs, but it seems that this partition is no longer supported by the version of fdisk on this kernel. The problem is that the jfs drives work correctly for a week or so, and then suddenly we start getting I/O errors on both arrays. When we try and remount the drives (firstly unmounting them and remounting using mount -o remount) or restarting the servers, we are getting I/O Superblock errors; the drive then becomes unreadable. This is the 2nd time this has occurred, so we have now done a hardware audit using Dell's diagnostic discs. No hardware fault has been found, so I am coming back to the conclusion that there is a problem with the format or the way that jfs is handled by the Dell 2850 Poweredge / PERC 4si card. I don't really want to reformat the drives again. I can't find any reference to this problem, and feel loath to go back to ext3. 
Best Wishes Max Eaves Max Eaves | System Specialist MOMS > Red Bee Media formerly BBC Broadcast > ECA | Broadcast Centre | 201 Wood Lane | London W12 7TP > T: +44 (0) 208 00 83491 extension > M: +44 (0) 7973 870275 > E: mailto:max...@re... > http://www.redbeemedia.com > http://www.bbc.co.uk/ This e-mail (and any attachments) is confidential and may contain personal views which are not the views of the BBC unless specifically stated. If you have received it in error, please delete it from your system. Do not use, copy or disclose the information in any way nor act in reliance on it and notify the sender immediately. Please note that the BBC monitors e-mails sent or received. Further communication will signify your consent to this. |
From: <pg...@jf...> - 2005-11-01 18:50:12
|
Hi, I have recently (six weeks ago) switched all the filesystems on my PC from 'ext3' to JFS for various reasons, in part because I like the JFS design, in part because I had discovered that over the past several months my 'ext3' ''root'' filesystem performance had degraded by 7 times over that of a freshly loaded copy of itself. So I have decided to compare the read speed of my ''root'' JFS filesystem as it is now, after six weeks of in-place package upgrades, and how it would be if I reloaded it. I have an otherwise quiescent disc, and I copied my ''root'' filesystem to it first as a partition image, to preserve the ''used'' layout, and then by 'tar', so that it would be reloaded in an optimal ''new'' layout. The filesystem contains around 7.5GiB of data in 360k files, and 2.3GiB are free. The result is that a whole-filesystem 'tar c' on the ''new'' layout takes 10min., on the ''used'' layout takes 26min., which is a factor of over 2.5 times longer. I have also done some spot checks on some largish files that I know have been regularly updated/rewritten, and there are similar slowdowns (no slowdown on files that have not been updated in the past six weeks). Details here: http://WWW.sabi.co.UK/Notes/anno05-4th.html#051101 Now 2.5 times is a lot better than 7 times (but the latter was over a rather longer period on a filesystem with less free space), but it is still somewhat disappointing, as the average transfer rate for reading the whole filesystem goes down (on a disc capable of around 35MiB/s sustained in optimal single-large-file conditions) from 12MiB/s, which is reasonable, to 5MiB/s, which is not awesome. |
From: <JB...@ha...> - 2005-11-01 13:23:02
|
Thank you for the information Peter, it has gotten me a little - scratch that - a lot further. Well I managed to get qtparted installed and I can now "see" what I did via Disk Druid I have /dev/sda1 type ext3 /dev/sda2 type ext3 /dev/sda3 type swap /dev/sda4 type extended /dev/sda5 free In the background I keep getting the following messages while running qtparted: Error: Filesystem was not cleanly unmounted! You should e2fsck. Modifying an unclean filesystem could cause severe corruption. Error: Could not detect file system. These sound very important, but in reading up on qtparted these messages might be because I'm running an x86_64 installation. Any idea? Should I be running some sort of file system check weekly? What is e2fsck? When I run it suppressing questions I still get the same message in qtparted, and e2fsck does not tell me it did anything. I believe I want to create a folder /video or /var/video before I use mkfs.jfs. What is the proper location for a video directory? and do I create this before running mkfs.jfs? Now I want /dev/sda5 to be formatted with jfs what is the mkfs.jfs to run? Where does mount/umount come into the picture? Since this is a video partition I want to make sure I can add more drives later and just add space to this logical volume? is the /dev/sda4 of type extended a logical volume? Thanks, jb |
From: Dave K. <sh...@au...> - 2005-10-31 22:13:40
|
On Sat, 2005-10-29 at 11:17 -0700, Chris Spiegel wrote: > Hi, > > When a new hard link is created inside of a directory, that directory's mtime > is not updated as it would be in most other circumstances. This behavior > exists in at least the latest Linux kernel, 2.6.14. This has been broken forever. It's surprising that nobody has pointed this out before. I notice that the same is true when creating a symlink. I'll fix it. > Thanks, > Chris Thank you! Shaggy -- David Kleikamp IBM Linux Technology Center |
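The behavior being fixed in this message can be checked from the shell: POSIX requires that creating a new directory entry (a hard link, like a regular file or symlink) mark the parent directory's mtime for update. On an unpatched JFS this check would fail; on most filesystems it passes.

```shell
set -e
# Create a directory with one file, record the directory's mtime,
# then add a hard link and confirm the mtime moved forward.
d=$(mktemp -d)
: > "$d/a"
before=$(stat -c %Y "$d")
sleep 1                        # mtime granularity can be one second
ln "$d/a" "$d/b"
after=$(stat -c %Y "$d")
[ "$after" -gt "$before" ] && echo "directory mtime updated"
```

A check like this makes a compact regression test for exactly the bug (and the symlink variant) Dave mentions fixing.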
From: <pg...@jf...> - 2005-10-31 16:23:02
|
>>> On Sun, 30 Oct 2005 11:22:39 -0500, "Jeffrey R. Born" >>> <jb...@ch...> said: jborn> [ ... ] Used Disk Druid to get the initial partitions in jborn> place and though I created a LVM but didn't format it. jborn> Nowhere did Disk Druid ask me if I wanted to use JFS. Well, that's how it is, unfortunately; while Fedora supports JFS, its _installer_ does not support it in its wizard. One can still install to JFS, but one has to choose ''expert mode'' and create partitions and filesystems manually instead of using Disk Druid. Once the 'root' partition and filesystem is created one can use tools like 'qtparted' to add other partitions and filesystems, for example of JFS type. [ ... ] jborn> First issue df -k does not list this logical partition. 'df' lists the _mounted_ filesystems (and only those of the mounted filesystems that are listed in '/etc/mtab', which may not list all). Partitions are sections of disk that may (or may not) contain a filesystem (conceivably a partition may contain more than one, but this would be extraordinarily weird), which is instead a collection of files/directories. Then even if a partition contains a filesystem, it may or may not be mounted (which more or less means ''activated''). Filesystems may also be inside things that are not partitions, like a whole CD or DVD (filesystems of type 'iso9660' or 'udf' usually), or even inside files (mountable with '-o loop'). jborn> How do I show the partitions after I have Linux installed? The file '/proc/partitions' lists all the partitions the kernel knows about currently (basically all those on discs the kernel is aware of), whether or not they contain filesystems. You can check that a partition contains a JFS filesystem with something like 'jfs_tune -l /dev/<whatever>'. jborn> How can I verify that my installation of FC4 has JFS jborn> support? Does this have to be compiled in? 
Well, JFS support is mostly part of the kernel, and all modern kernels have it compiled in, and I occasionally use Fedora 4 and its kernel definitely has JFS support. jborn> With it installed/not installed can I use yum to get the jborn> latest JFS/JFSUtils? If so where is the repo? The JFS utilities are part of the standard Fedora Core set, and the package name for Fedora 4 is "jfsutils-1.1.7-2.i386.rpm". |
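The checks Peter suggests in this message look like the following. /proc/partitions exists on any Linux system; the jfs_tune line needs a real JFS device, so the device name is illustrative and that command (and the Fedora package query) are left as comments.

```shell
# Every partition the kernel currently knows about, mounted or not:
cat /proc/partitions
# Confirm a given partition holds a JFS filesystem (device name illustrative):
#   jfs_tune -l /dev/sda5
# Confirm the JFS utilities are installed on Fedora:
#   rpm -q jfsutils
```

Comparing /proc/partitions against 'df -k' output is a quick way to spot partitions that exist but are not mounted, which was the original confusion.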
From: <pg...@jf...> - 2005-10-30 23:05:19
|
>>> On Wed, 26 Oct 2005 11:05:40 -0500, Dave Kleikamp >>> <sh...@au...> said: BTW, in this discussion if we were in the same room and with a blackboard for doing a couple of pictures I could get across my guesses/points a lot easier and quicker and with less repetition. This is such a narrow bandwidth medium... Oh well :-/. [ ... ] >> Yes, this is more or less what I was expecting (that is, it >> obviously first creates 32KiB extents and then coalesces them >> after writing them). shaggy> It doesn't work that way today. The blocks are actually shaggy> allocated one page at a time and the extent is grown shaggy> with each new allocation. [ ... ] Uh it is =5Freally=5F block-at-a-time. My previous understanding was that it was a buddy allocator with a block-scoring bitmap, but it seems it is instead really a bitmap allocator with a tree index; or perhaps not... [ ... ] shaggy> Currently holes can exist either in the middle or at the shaggy> end of a file. If there is no phyisical block mapped to shaggy> a logical block of a file, it is read as zeros. I only shaggy> suggested the ABNR extents as a way of preallocating shaggy> contiguous space for the holes, since I thought that was shaggy> what you were asking for. Yes, but the unwritten bit at the end does not need a special extent type, it can be part of an existing extent, because that it is unwritten-but-zero is implied in its position, which is not really the case for a hole in the middle of a file. The scheme some people have been thinking of is to have =5Fthree=5F ''file sizes'', in order of increasing (or same) value: * max bytes written; anything beyond this reads as zeroes. * max bytes readable; anything beyond this is not readable. * bytes actually allocated. For an empty but preallocated file of size N, it could be a single extent of size N. Then the three sizes would be initially like 0:0:N. Then a seek to the end would make them 0:N:N, and a write of 4KiB would make them 4096:N:N for example. 
shaggy> [ ... ] Would we need a mechanism to free unused shaggy> preallocations=3F >> [ ... ] Also, preallocations can be ephemeral (disappear on >> close) or persistent. [ ... ] shaggy> If preallocations are freed at file close, I'm not sure shaggy> there's an advantage over the current behavior where jfs shaggy> locks out other allocations in the AG until the file is shaggy> closed. Ahhhh, I can see some advantages, e.g. if the free list is fragmented. In part because one no longer need to scatter parallel writes across different AGs; I'd like to have higher chanced of keeping ''related'' files near each other in the same AG, not just blocks of the same file near each other. But also because if the free list is fragmented, there will be free blocks of potentially many different sizes in it, and preallocating from the beginning the final size means that the largest, or a large, contiguous block can be reserved (but I see that your compromise below would achieve this, so fine). >> - alternatively, a default minimum extent size=3F So that >> the extents are initially allocated of that size, but >> can be reduced by 'close'(2) or 'ftruncate'(2) to the >> actual size of the file. [ ... ] This is an alternative to whole file preallocation; just preallocate the minimum extent size whenever a write happens, around the address of that write for example. [ ... ] shaggy> I can see preallocation adding more complexity to the shaggy> code. Suppose we preallocate 1M when we first write 4K shaggy> at the beginning of a file. We then seek out to say 256K shaggy> and write another 4K. [ ... alternatives ... ] Well, in this case I'd just zero all the intervening blocks. It would not be, I hope :-), really that hard to do the other two things you mention (ABNR or extent splitting), but I guess that if one sets an option to preallocate I would assume that one wants dense files. 
I personally reckon that a second per-block bit (written or not, not just allocated or not) might be useful, but probably too late. However conceivably whether a block is allocated is probably recorded redundantly in the fact it is part of an extent and in the bitmap, so one could do some reinterpretation. Might be hairy though... Unless the option is for a minimum extent size, in which case one wants just chunky files (that is files with holes, where however allocated bit and unallocated bits come in much larger chunks that a single block). >> - a maximum extent size=3F For example to ensure that no extent >> larger than 256KiB is ever allocated=3F [ ... ] shaggy> I'm not sure what that would buy us. >> Well, it would prevent really large extents from happening. In >> a buddy system very large allocations (wrt to the size of the >> whole) cause trouble. shaggy> I don't understand this. Just speculating... In general theoretical terms, Knuth in analyzing the buddy system (for RAM allocation though) says that it has good performance, which is a surprise, as it has the two problems of potentially lots of internal fragmentation because of the power-of-two issue, and of free list fragmentation because of the coalesce-only-buddies issue. The good performance happens because in his tests most of the blocks are rather small wrt to the size of the arena. So, well, perhaps I don't know how JFS really allocates stuff, but suppose that one creates a 5GiB file, preallocated or written, and suppose there is a free 8GiB block. 3GiB might get wasted because of the power-of-two issue. Now suppose the maximum extent size allowed is 1GiB. We get 5 1GiB extents allocated, and then 1GiB+2GiB free block. This seems to me a better outcome than one 8GiB extent, and this is a good thing that can't be done with a RAM buddy allocator. 
There is also a bit of the BSD FFS/'ext3' logic that, when writing large files, they switch cylinder group every now and then, so that both small and large files in, say, the same directory can (begin) nearby in disc distance... This is not per-file locality, but per-(sub)tree locality, which I think often matters too.

>> So for example an 11MiB file should ideally be not in one
>> 16MiB extent, but in three ones, 8+2+1MiB (or arguably
>> 8+4MiB), and now I wonder what happens with JFS -- I shall
>> try.

shaggy> I can see that having 3 extents is no worse, but I can't
shaggy> see why you would want to avoid a larger extent.

Again, if it is no worse, why not give the option ''just-in-case''? :-)

But more seriously, for example because in this case one saves 5GiB, if the single larger extent must be 16GiB for an 11GiB file. Those 5GiB of internal fragmentation not only waste space, they make the arm travel further over unused data. [ ... ]

shaggy> I still don't get the problem of tying up the buddy
shaggy> tree. If an extent takes up an entire AG, that's great.
shaggy> We've got a better chance of finding contiguous free
shaggy> space in other AGs.

Yes, if all you care about is single-file performance, and performance just after a fresh load. But suppose one cares also about keeping files that are logically nearby (e.g. in the same directory) nearby on the disc too, and about what happens when the free list becomes fragmented. And as long as file bodies are _mostly_ contiguous, that's fine.

>> Conversely, when allocating smaller files, having a minimum
>> advisory extent size of say 256KiB can help ensure
>> contiguity even in massive multithreaded writes or on
>> fragmented free lists.

shaggy> I guess you're not interested in efficient use of free
shaggy> space.

Well, if the user sets a minimum (fixed or default) extent size, that's a tradeoff they make knowing what they are doing. 
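The capped splits in the examples above (5GiB into five 1GiB extents, 11MiB into 8+2+1MiB) are just a greedy power-of-two decomposition; a sketch, with sizes in arbitrary units and a made-up function name:

```c
/* Greedy power-of-two ("buddy") decomposition of an extent request,
 * capped at max_extent.  11 units with no effective cap splits as
 * 8 + 2 + 1; 5 units with a cap of 1 splits as 1+1+1+1+1.
 * Returns the number of extents written into out[]. */
static int buddy_split(unsigned long size, unsigned long max_extent,
                       unsigned long out[], int out_max)
{
    int n = 0;
    while (size > 0 && n < out_max) {
        unsigned long e = 1;
        /* largest power of two not exceeding the remainder or the cap */
        while (e * 2 <= size && e * 2 <= max_extent)
            e *= 2;
        out[n++] = e;
        size -= e;
    }
    return n;
}
```

Since every extent is at least one unit, the loop always terminates with the whole request covered (given enough slots in out[]).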
But as remarked below, I may not have made it clear that I would have such options for adjusting allocation/preallocation granularity default to 0 or infinity (for min/max granule), so by default allocation would be exactly as it is now. [ ... ]

>> * In many important cases files are overwritten with contents
>>   of much the same size as they were (e.g. recompiling a '.c'
>>   into a '.o'). So one might as well, tentatively, preserve
>>   the previously allocated size on a 'ftruncate', and then do
>>   it for real on a 'close'.

shaggy> Interesting. We'd have to be careful about leaving
shaggy> stale data in pages that may not be written to. They
shaggy> would either have to be zero-filled, or have a hole
shaggy> punched into the file.

That's why ideally one has a ''max byte written so far'' high-water mark, not just the ''max readable'' and ''max allocated'' ones. My expectation is that seeking around while writing is actually rather rare...

The other classic example is repeated package ('.rpm', '.deb') upgrades. Almost always the upgraded packages have the same files with the same or much the same sizes, just different contents.

shaggy> (Does the compiler really truncate an existing file and
shaggy> re-write it, or does it completely replace the .o with a
shaggy> new file?)

Most such programs are stupid, unfortunately. But modifying them is very easy (compiler, 'tar', 'cp', ...), and even easier and probably almost as good is to modify what 'stdio' does, for example with a 'fopen(....,"w")': instead of that becoming an 'open'(2) with 'O_CREAT|O_TRUNC', which deallocates the existing blocks, do it with 'O_RDWR', and then 'ftruncate' on 'fclose'(3).

One of the scandals of our modern times is that various libcs and kernels don't take advantage of the useful implicit hints in 'fopen'(3)/'open'(2) options, both as to allocation and read/write clustering and access patterns. 
One could also easily modify the 'open'(2) implementation to make 'O_CREAT|O_TRUNC' equivalent to 'O_RDWR' plus resetting the ''max readable'' watermark to zero, and then auto-truncate on 'close'(2). Either could be done by 'LD_PRELOAD' of a suitable set of wrappers, at least initially. [ ... ]

>> * Imagine a 2300GiB JFS filesystem, with a minimum extent
>>   size of 1GiB and a maximum extent size of say 16GiB (never
>>   mind the AG limits :->), mounted perhaps with '-o
>>   nointegrity'.

shaggy> Uh, you'd be willing to lose everything if your system
shaggy> crashed or lost power? If not, you don't want nointegrity.

Ahhhh, but all that journaling does in JFS so far is to protect _metadata_ transactions. Once the virtual volumes are created, there are no further metadata updates, except perhaps for the inode time fields. Admittedly, by the same argument there is not much point in disabling journaling for the ''pool'' JFS filesystem. All the journaling that matters would happen _inside_ the files (virtual volumes), and I would not disable that...

>> * Such a filesystem plus '-o loop' (built on 'md' if needed)
>>   looks to me like a ''for free'' LVM, and with essentially
>>   the same performance, and with no need for special
>>   utilities or configuration.

shaggy> You're losing me here. I don't think we need a
shaggy> filesystem to replace lvm.

Yes, but if a filesystem can perform nearly as well as DM/LVM2, having the option to use it like that seems to me rather valuable, if only for the sake of minimizing entities. For example, one of the major uses of DM/LVM2 is for Oracle tablespaces, for two reasons:

* Tablespaces should ideally be contiguous and low overhead, so partitions are often used (even if some people think this is not necessary).

* Many Oracle databases have hundreds of them, and one can only create so many real partitions, and managing them is a pain regardless.

It is therefore in this case that DM/LVM2 are used as a crude replacement for JFS. 
Now consider: instead of creating hundreds of logical volumes with DM/LVM2, just create hundreds of ordinary preallocated files with JFS, with a high minimum extent size and a somewhat higher maximum extent size. Quick and easy, and the same performance. And I am fairly sure of this, because this article:

http://WWW.Oracle.com/technology/oramag/webcolumns/2002/techarticles/scalzo_linux02.html

says that raw tablespaces (under DM/LVM2) are _slower_ than using 'ext3' files (and JFS does not do too badly). My guess is that this is because these are obviously (like in most naive benchmarks) freshly loaded filesystems, and 'ext3' achieves optimal layout on freshly loaded data (and JFS almost). So my further guess is that if the layout is good, a file system can beat DM/LVM2 at its own game, because DM/LVM2 are in effect a crude large-extent large-file filesystem.

>> My understanding of the current logic is to allocate small
>> extents (the size of a 'write'(2) I guess), use the hint to
>> put them near each other and then coalesce them if possible
>> (if the hints worked that well).

shaggy> Ideally, the size of the extents would be the size of
shaggy> the write. In most cases, we are doing allocation a page
shaggy> at a time. If there is space immediately after the
shaggy> previous extent, that extent is extended to contain the
shaggy> new blocks, so there is no coalescing going on.

Just nitpicking, but to illustrate my mental model of JFS: that «is extended» is in effect coalescing, unless the buddy system is really a fiction. Suppose that 2GiB have just been written in the currently open extent, and that these 2GiB are all inside the first of a pair of 2GiB buddies. Write another byte and you need to allocate the second 2GiB buddy, thus coalescing it with the first one; effectively the 2GiB+1 extent is now contained in a 4GiB buddy.

shaggy> It is not as complex as preallocation. 
shaggy> Even if we know
shaggy> in advance the size of the file, we would have to make
shaggy> sure that unwritten pages are zeroed, either by
shaggy> physically writing zero to disk, or punching holes in
shaggy> the extent (either a real hole, or an ABNR extent).

Or use the written/readable/allocated ''sizes''. I would expect most preallocations to be for sequentially written files, so I would not worry much about holes in the middle.

[ ... top-down vs. bottom-up allocation ... ]

shaggy> Hmm. Maybe there's a compromise. When doing allocations
shaggy> for file data, jfs could search the binary buddy tree
shaggy> for an extent of a certain size (say 1 MB), but continue
shaggy> to allocate as it does. That way a sequentially-written
shaggy> file would grow contiguously into that space.

Yes, that seems quite a good idea, as it would most likely achieve the same effect with minimal code disruption. Now, this is equivalent to preallocation, since the whole AG is locked: in effect the 1MiB buddy is preallocated. [ ... ]

>> The general case for preallocation is made for example in
>> this 'ext[23]' paper:
>> http://WWW.USENIX.org/events/usenix02/tech/freenix/full_papers/tso/tso_html/

shaggy> [ ... ] If we allow some explicit mechanism to
shaggy> preallocate a large file, I think we would have some
shaggy> options.

Yes, and there are two ways to do so, in-band and out-of-band; in-band, which is what the 'ext3' guys are mostly thinking about, may require changing APIs or adding extended attributes to files. My ''on-the-cheap'' preference is for out-of-band options, that is, either global or per-filesystem.

shaggy> Maybe we could implement dense files and use ABNR
shaggy> extents in some explicit cases. Again, if we have some
shaggy> way to know to begin a file where there is a lot of free
shaggy> space, and can lock out other allocations, we should get
shaggy> the desired results.

Yes, that sounds reasonable. 
ABNR, as indicated elsewhere, is probably not that needed, because I expect most preallocation to be sequential (no holes in the middle) or to be by large-granule extents, with holes in the middle of large chunks. [ ... ]

shaggy> I guess I'm uncomfortable preallocating all the time,
shaggy> since it will lead to more fragmentation. If every
shaggy> small file begins at a 1 MB offset, we'll have lots of
shaggy> free space in between these small allocations. [ ... ]

This is a big misunderstanding, sorry; I did say that I would like either a global or per-filesystem set of options, like:

* 'smallest-extent-size' [default 0]: no newly created extent can be smaller than this.

* 'default-extent-size' [default 0]: extents are initially created of this size, and the last one is truncated on close.

* 'largest-extent-size' [default 0, meaning infinity]: no newly created extent can be larger than this.

These could be either in '/proc/fs/jfs/' (global), or mount options (per-filesystem). Then ideally 'ftruncate' would also support the ''truncate to a larger size than the current one'' semantics (obeying the options above too).

>> Then there is the ''DM/LVM2'' replacement story...

shaggy> I don't want to go there. :^)

Oh no.... :-) But again, suppose that you can create a 2300GiB JFS filesystem over an MD RAID, and within it you can efficiently create (which means: preallocated, mostly contiguous, unwritten) 50-500GiB files in say 1-10GiB extents, say for tablespaces or virtual machine discs, to be mounted '-o loop'... Sure, one can do that now, but there are a couple of annoyances:

* To ensure the best contiguity all the volumes should be created just after 'jfs_mkfs', when the free list is mostly contiguous and the buddy system wholesome. And this may achieve too much contiguity.

* The big files need to be actually _written to_ to achieve allocation. 
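The second annoyance can be seen directly: merely extending a file out to volume size with ftruncate(2) produces a hole, not an allocation, so nothing contiguous gets reserved. A sketch (path, size, and helper name are made up):

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* "Create" a large virtual-volume file without writing any data, and
 * report how many 512-byte blocks were actually allocated for it.
 * On current filesystems the answer is (essentially) zero: ftruncate
 * only extends the size, which is why the big files must today be
 * written to for real before use as loop-mounted volumes. */
static long long make_sparse_volume(const char *path, off_t size)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0 || ftruncate(fd, size) != 0)
        return -1;
    struct stat st;
    fstat(fd, &st);
    close(fd);
    return (long long)st.st_blocks;
}
```

With the proposed minimum-extent or preallocation options, the same "create without writing" step could instead reserve real, contiguous extents.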
One would either preallocate the large files, or set a minimum mandatory extent size of say 4GiB and not preallocate, and let the filesystem be allocated in 4GiB chunks. It would sound wonderful to me... MD/JFS/'loop' would then do at least 90% of what DM/LVM2 do, for free. It would be 100% if you implemented reverse-copy-on-write (that is, snapshot) files :-).

BTW, the Oracle tablespace and VM virtual disc stories are part of my interests, and these are about DM/LVM2 replacement. But I am also interested in the upgrade-the-installed-packages and the archive-of-DVD-images for a video-on-demand server stories, in case that was not obvious. All these would rather benefit from preallocation, either whole-file or chunky-extents... |
From: Jeffrey R. B. <jb...@ch...> - 2005-10-30 16:22:12
|
|
From: Eric G. <gra...@gm...> - 2005-10-29 22:48:06
|
On 10/27/05, Dave Kleikamp <sh...@au...> wrote:
>
> On Thu, 2005-10-27 at 17:11 -0400, Eric Gharakhanian wrote:
> >
> > I have tried multiple versions of jfsutils (including 1.1.8) using
> > various live CDs and also by compiling different versions on my server
> > box, to no avail. Whenever you have the time, any help you give will
> > be appreciated.
>
> Okay. I just asked about older versions since I noticed you were
> running 1.1.10, which is brand new, and I suspected something new.
>
> The addresses fsck is attempting to read from look reasonable, based on
> the number of blocks in the file system as reported.
>
> I haven't seen a problem like yours. What kind of hardware is this
> running on? There are no messages to dmesg when fsck is run, are there?
>
> --
> David Kleikamp
> IBM Linux Technology Center

About hardware: the hard drive is a Western Digital WD1600 which is around 18 months old. It is connected to a HighPoint HPT100 RAID controller (integrated into the motherboard). However, hardware RAID has been disabled, so it is basically functioning as a standard ATA controller. Also on this controller are two Western Digital WD800 drives, which are running software RAID 0. The WD1600 and one of the WD800s are sharing an IDE channel. The WD1600 (our problem drive) is the master.

I have tried running fsck with the drive connected to a different motherboard and using different cables, with basically the same results. I will check about messages to dmesg tomorrow. |
From: Chris S. <jf...@ha...> - 2005-10-29 18:18:08
|
Hi,

When a new hard link is created inside a directory, that directory's mtime is not updated as it would be in most other circumstances. This behavior exists in at least the latest Linux kernel, 2.6.14.

Thanks,
Chris |
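A minimal check of the reported behaviour, with made-up file names: compare the directory mtime before and after link(2). On a POSIX-conforming filesystem the mtime should be updated; the report above says JFS (as of 2.6.14) leaves it unchanged.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

/* Returns 1 if creating a hard link inside `dir` bumps the directory
 * mtime (the POSIX-required behaviour), 0 if not (the reported JFS
 * behaviour).  Creates files "a" and "b" inside the directory. */
static int link_updates_dir_mtime(const char *dir)
{
    char a[4096], b[4096];
    snprintf(a, sizeof a, "%s/a", dir);
    snprintf(b, sizeof b, "%s/b", dir);

    FILE *f = fopen(a, "w");          /* create the link target */
    if (f)
        fclose(f);

    struct stat before, after;
    stat(dir, &before);
    sleep(1);                         /* make any mtime change visible */
    link(a, b);                       /* the operation under test */
    stat(dir, &after);
    return after.st_mtime > before.st_mtime;
}
```

Run it against a directory on a JFS mount to reproduce the report.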
From: evilninja <evi...@gm...> - 2005-10-29 01:00:29
|
J. Peters schrieb:
> Did Red Hat give any reason for dropping JFS from Enterprise Linux version 4?

This sort of question comes up every now and then, same on linux-xfs ;) Here's an old post:

http://www.mail-archive.com/jfs...@li.../msg00263.html
https://bugzilla.novell.com/show_bug.cgi?id=115227

> It is rolled into Fedora Core 4.

Have they adopted a popularity-contest package yet, as the Debian folks have? So one (here: Red Hat) could find out what packages (filesystems?) are needed by its she^W^W^Wusers ;-) -> http://popcon.debian.org/

Christian.
--
BOFH excuse #156: Zombie processes haunting the computer |
From: Dave K. <sh...@au...> - 2005-10-27 21:26:53
|
On Thu, 2005-10-27 at 17:11 -0400, Eric Gharakhanian wrote:
>
> I have tried multiple versions of jfsutils (including 1.1.8) using
> various live CDs and also by compiling different versions on my server
> box, to no avail. Whenever you have the time, any help you give will
> be appreciated.

Okay. I just asked about older versions since I noticed you were running 1.1.10, which is brand new, and I suspected something new.

The addresses fsck is attempting to read from look reasonable, based on the number of blocks in the file system as reported.

I haven't seen a problem like yours. What kind of hardware is this running on? There are no messages to dmesg when fsck is run, are there?

--
David Kleikamp
IBM Linux Technology Center |
From: Eric G. <gra...@gm...> - 2005-10-27 21:11:28
|
On 10/26/05, Dave Kleikamp <sh...@au...> wrote:
>
> On Wed, 2005-10-26 at 17:41 -0400, Eric Gharakhanian wrote:
> > I am running gentoo on my little home server, and for the first time
> > ever the box locked up (I believe due to a loose PCI card that was not
> > screwed into place, DOH!). Anyhow, one hard drive on this box is
> > a 160GB drive which has a roughly 150GB JFS partition. This
> > partition holds home videos, backups of DVDs and many pictures from
> > my digital camera. Upon rebooting I could not mount that partition:
> >
> > mount: wrong fs type, bad option, bad superblock
> > on /dev/hdh4
> >
> > next I ran # fsck.jfs -v /dev/hdh4
> >
> > fsck.jfs version 1.1.10, 19-Oct-2005
> > processing started: 10/25/2005 6.1.34
> > Using default parameter: -p
> > The current device is: /dev/hdh4
> > Open(...READ/WRITE EXCLUSIVE...) returned rc = 0
> > Primary superblock is valid.
> > The type of file system for the device is JFS.
> > Block size in bytes: 4096
> > Filesystem size in blocks: 37636278
> > **Phase 0 - Replay Journal Log
> > ujfs_rw_diskblocks: read 0 of 2116 bytes at offset 154124644352
> > LOGREDO: Unable to read Journal Log superblock.
> > logredo failed (rc=-260). fsck continuing.
> > **Phase 1 - Check Blocks, Files/Directories, and Directory Entries
> > ujfs_rw_diskblocks: read 0 of 4096 bytes at offset 40802189312
> > Unrecoverable error reading M from /dev/hdh4. CANNOT CONTINUE.
> > Fatal error (-10021,30) accessing the filesystem
> > (1,40802189312,4096,0).
> > processing terminated: 10/25/2005 6:02:28 with return code: -10021
> > exit code: 8.
> >
> > I can mount the drive read only, but when I try to access many
> > directories as root, bash returns Permission denied.
> >
> > Any help recovering my data would be greatly appreciated.
>
> I don't have time to look at this right now. Would you have a live CD
> you could boot from? 
> You may try to run an older version of jfs_fsck
> against the partition to see if it will replay the journal correctly.
> If that works, revert to jfsutils-1.1.8 until I can figure out what
> happened here.

I have tried multiple versions of jfsutils (including 1.1.8) using various live CDs and also by compiling different versions on my server box, to no avail. Whenever you have the time, any help you give will be appreciated. |
From: Dave K. <sh...@au...> - 2005-10-26 22:03:36
|
On Wed, 2005-10-26 at 17:41 -0400, Eric Gharakhanian wrote:
> I am running gentoo on my little home server, and for the first time
> ever the box locked up (I believe due to a loose PCI card that was not
> screwed into place, DOH!). Anyhow, one hard drive on this box is
> a 160GB drive which has a roughly 150GB JFS partition. This
> partition holds home videos, backups of DVDs and many pictures from
> my digital camera. Upon rebooting I could not mount that partition:
>
> mount: wrong fs type, bad option, bad superblock
> on /dev/hdh4
>
> next I ran # fsck.jfs -v /dev/hdh4
>
> fsck.jfs version 1.1.10, 19-Oct-2005
> processing started: 10/25/2005 6.1.34
> Using default parameter: -p
> The current device is: /dev/hdh4
> Open(...READ/WRITE EXCLUSIVE...) returned rc = 0
> Primary superblock is valid.
> The type of file system for the device is JFS.
> Block size in bytes: 4096
> Filesystem size in blocks: 37636278
> **Phase 0 - Replay Journal Log
> ujfs_rw_diskblocks: read 0 of 2116 bytes at offset 154124644352
> LOGREDO: Unable to read Journal Log superblock.
> logredo failed (rc=-260). fsck continuing.
> **Phase 1 - Check Blocks, Files/Directories, and Directory Entries
> ujfs_rw_diskblocks: read 0 of 4096 bytes at offset 40802189312
> Unrecoverable error reading M from /dev/hdh4. CANNOT CONTINUE.
> Fatal error (-10021,30) accessing the filesystem
> (1,40802189312,4096,0).
> processing terminated: 10/25/2005 6:02:28 with return code: -10021
> exit code: 8.
>
> I can mount the drive read only, but when I try to access many
> directories as root, bash returns Permission denied.
>
> Any help recovering my data would be greatly appreciated.

I don't have time to look at this right now. Would you have a live CD you could boot from? You may try to run an older version of jfs_fsck against the partition to see if it will replay the journal correctly. If that works, revert to jfsutils-1.1.8 until I can figure out what happened here. 
--
David Kleikamp
IBM Linux Technology Center |
From: J. P. <jap...@ma...> - 2005-10-26 22:01:02
|
Did Red Hat give any reason for dropping JFS from Enterprise Linux version 4? It is rolled into Fedora Core 4. |