From: Tony N. <ton...@ge...> - 2009-06-25 03:50:46

On 09-06-24 19:20:19, Kenneth Porter wrote:
> [It probably makes sense to have messages to dump-announce get echoed
> to dump-users, for those subscribed only to the latter.]
>
> A source RPM for the various Linux distros derived from Red Hat is
> available here:
>
> <http://download.fedora.redhat.com/pub/fedora/linux/development/source/SRPMS/dump-0.4-0.1.b42.fc12.src.rpm>
>
> Thanks to Adam Tkac of Red Hat for the rapid packaging.

I've taken a brief look at this, and IIUC, to get EXT4 support one must also install a current e2fsprogs-libs package. I don't see any changes that would support EXT4, but the CHANGES file mentions:

  18. Add (preliminary) ext4 support - thanks to libext2fs which does
      all the job for us. Thanks to Gertjan van Wingerde
      <gwi...@gm...> for the patch.

Note that, in line with current RH packaging, the build is no longer static, so the installed version of any libraries will be used by dump and restore.

Also, as I'm still on F9, I got the usual error from rpmbuild ("error: unpacking of archive failed on file /home/tonyn/rpmbuild/SOURCES/dump-0.4b42.tar.gz;4a42cc95: cpio: MD5 sum mismatch") and had to manually unpack the SRPM with:

  rpm2cpio dump-0.4-0.1.b42.fc12.src.rpm | cpio -i

and then move the files where I wanted them (in the rpmbuild directory structure), and untar the source.

--
____________________________________________________________________
TonyN.:'                       <mailto:ton...@ge...>
      '                        <http://www.georgeanelson.com/>
From: Kenneth P. <sh...@se...> - 2009-06-24 23:36:02

[It probably makes sense to have messages to dump-announce get echoed to dump-users, for those subscribed only to the latter.]

A source RPM for the various Linux distros derived from Red Hat is available here:

<http://download.fedora.redhat.com/pub/fedora/linux/development/source/SRPMS/dump-0.4-0.1.b42.fc12.src.rpm>

Thanks to Adam Tkac of Red Hat for the rapid packaging.
From: Kenneth P. <sh...@se...> - 2009-06-24 23:35:59

I googled for "encrypted ext3" and came across this article that shows how to create an encrypted block device on which you put the filesystem of your choice.

<http://www.linuxjournal.com/article/7743>

More info on the underlying encryption system:

<http://en.wikipedia.org/wiki/Dm-crypt>
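[Editor's sketch, following up on the dm-crypt pointer above: with the LUKS tooling (cryptsetup) the approach looks roughly like the following. The device name /dev/sdc1, the mapper name backupcrypt, and the mount point /mnt/backup are placeholders, and flags vary across cryptsetup versions, so treat this as an outline rather than a tested recipe.]

```shell
# One-time setup: turn the removable disk's partition into a LUKS
# container, then create an ext3 filesystem inside the mapped device.
cryptsetup luksFormat /dev/sdc1            # prompts for a passphrase
cryptsetup luksOpen /dev/sdc1 backupcrypt
mke2fs -j /dev/mapper/backupcrypt          # ext3 on the encrypted device
cryptsetup luksClose backupcrypt

# Each backup run: open, mount, dump, unmount, close.
cryptsetup luksOpen /dev/sdc1 backupcrypt  # passphrase needed again here
mount /dev/mapper/backupcrypt /mnt/backup
dump -0uaf /mnt/backup/root-$(date +%Y%m%d).dump /
umount /mnt/backup
cryptsetup luksClose backupcrypt
```

A stolen disk then holds only LUKS-encrypted blocks, useless without the passphrase.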
From: MargoAndTodd <mar...@gm...> - 2009-06-24 16:38:46
|
Hi All, I have a series of removable disks that I use to put my dumps on. If one of these disks gets stolen, it will be a serious problem. Anyone have any advice as to how to make the data useless to any potential thief? Encryption, passwords, etc..? Many thanks, -T |
From: MargoAndTodd <mar...@gm...> - 2009-06-24 16:38:37
|
Hi Stelian, Any new on the Ext4 support front? Many thanks, -T |
From: MargoAndTodd <mar...@gm...> - 2009-06-24 16:34:01
|
Hi All, If anyone wants to follow Red Hat Enterprise Linux's bug report on Ext4 and Dump, you can sign up for it at: https://bugzilla.redhat.com/show_bug.cgi?id=444534 -T |
From: Peter M. <pm...@fr...> - 2009-01-24 23:03:26
|
On Sat, 24 Jan 2009, Peter Münster wrote: > I attach a script, that uses this principle. Sorry, this time I really attach it... Cheers, Peter -- http://pmrb.free.fr/contact/ |
From: Peter M. <pm...@fr...> - 2009-01-24 22:43:12
|
On Fri, Jan 16 2009, Stelian Pop wrote: > It does look simpler but the advantage of the tower of Hanoi way is that > it minimizes the number of tapes you need to restore in case of data > failure. Hello, Another strategy is, to minimize the occupied space of the backups: if dump-(n) smaller than dump-(n+1) then do the dump of level n and delete all higher levels else try the same with n <- n + 1 (until n = 9) I attach a script, that uses this principle. Cheers, Peter -- http://pmrb.free.fr/contact/ |
From: Stelian P. <st...@po...> - 2009-01-16 16:47:08
|
On Mon, Jan 12, 2009 at 08:14:43PM -0500, Bruce Hyatt wrote: > Keith G. Robertson-Turner wrote: > > >Here's what you should be doing (IMHO): > > > > Day | Level | Set > > ------------------------- > > 1 | 0 | 1 > > 2 | 1 | 1 > Etc. > > Thanks, Keith. > > I'll try this. It looks simpler than the "tower of Hanoi" at least. It does look simpler but the advantage of the tower of Hanoi way is that it minimizes the number of tapes you need to restore in case of data failure. Imagine you do a level 0 dump on Sunday, 1 on Monday, 2 on Tuesday etc, and that the failure occurs on Saturday. In this case you will need to restore the level 0, 1, 2, 3, 4, 5, 6 tapes (Friday being the last known good tape - number 6). If you use the tower of Hanoi sequence you would have to restore only 3 or 4 tapes at the most. Stelian. -- Stelian Pop <st...@po...> |
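[Editor's sketch of the tape-count difference Stelian describes: given a week of dump levels in chronological order, the chain restore needs is found by walking backwards and keeping each dump whose level is lower than every level kept so far, until level 0 is reached. The Hanoi-style sequence 0 3 2 5 4 7 6 used below is the style of sequence suggested in dump's man page; the helper function itself is just an illustration, not part of dump.]

```shell
#!/bin/bash
# Print the dumps needed to restore after the last dump in a sequence:
# scan newest-to-oldest, keeping each dump whose level is lower than
# every level already kept, and stop once a level 0 is collected.
restore_chain() {
    local -a levels=("$@") chain=()
    local cur=10 i
    for (( i = ${#levels[@]} - 1; i >= 0; i-- )); do
        if (( levels[i] < cur )); then
            chain=("${levels[i]}" "${chain[@]}")   # prepend: oldest first
            cur=${levels[i]}
        fi
        (( cur == 0 )) && break
    done
    echo "${chain[@]}"
}

restore_chain 0 1 2 3 4 5 6    # sequential week -> 0 1 2 3 4 5 6 (7 tapes)
restore_chain 0 3 2 5 4 7 6    # Hanoi-style week -> 0 2 4 6 (4 tapes)
```

With sequential levels every tape is on the chain; the Hanoi-style week needs only four.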
From: Eric J. <eje...@sw...> - 2009-01-13 03:50:36
|
Hi Bruce, On Jan 12, 2009, at 8:14 PM, Bruce Hyatt wrote: > I'll try this. It looks simpler than the "tower of Hanoi" at least. I > was hoping I could make a full backup and then run 1 simple job every > day that would backup all files changed since the last backup. Now I > have to dive into regex and scripts. Oh well, had to happen sooner or > later 8-} You could make life a little simpler by just hard-coding 8 lines into your crontab - just specify levels 1-7 for the seven days of the week (e.g. level 1 every Sunday, 2 every Monday, etc.) and then a level 0 from time to time, say, on the first day of a month. I guess you'd need to be careful that your level 0 didn't run at the same time as the other level dump scheduled for whatever day of the week the 1st happened to fall on. If you wanted to be even more careful, and not overwrite the previous level 1 dump with the next one, you could use unique filenames incorporating the date and time, e.g. a dump command like this: dump -1uaf dump_level1_`date "+%m-%d-%y"` / Not quite as sophisticated, but pretty simple to implement with cut- and-paste ;-) Eric |
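[Editor's sketch of what Eric's 8-line crontab could look like in root's crontab (`crontab -e`). The paths, times, and filename pattern are made up for illustration; note that % must be escaped as \% in crontab entries, because cron otherwise treats a bare % as a newline.]

```shell
# Hypothetical root crontab fragment: levels 1-7 across the week,
# plus a monthly level 0 at an earlier hour to avoid overlap.
# m  h  dom mon dow  command
0    3  *   *   0    /sbin/dump -1uaf /backup/lvl1_$(date "+\%m-\%d-\%y") /
0    3  *   *   1    /sbin/dump -2uaf /backup/lvl2_$(date "+\%m-\%d-\%y") /
0    3  *   *   2    /sbin/dump -3uaf /backup/lvl3_$(date "+\%m-\%d-\%y") /
0    3  *   *   3    /sbin/dump -4uaf /backup/lvl4_$(date "+\%m-\%d-\%y") /
0    3  *   *   4    /sbin/dump -5uaf /backup/lvl5_$(date "+\%m-\%d-\%y") /
0    3  *   *   5    /sbin/dump -6uaf /backup/lvl6_$(date "+\%m-\%d-\%y") /
0    3  *   *   6    /sbin/dump -7uaf /backup/lvl7_$(date "+\%m-\%d-\%y") /
0    1  1   *   *    /sbin/dump -0uaf /backup/lvl0_$(date "+\%m-\%d-\%y") /
```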
From: Bruce H. <bru...@gm...> - 2009-01-13 01:14:49
|
Keith G. Robertson-Turner wrote: >Here's what you should be doing (IMHO): > > Day | Level | Set > ------------------------- > 1 | 0 | 1 > 2 | 1 | 1 Etc. Thanks, Keith. I'll try this. It looks simpler than the "tower of Hanoi" at least. I was hoping I could make a full backup and then run 1 simple job every day that would backup all files changed since the last backup. Now I have to dive into regex and scripts. Oh well, had to happen sooner or later 8-} Sincerely, Bruce Hyatt |
From: Keith G. Robertson-T. <dum...@sl...> - 2009-01-12 11:11:25
|
Verily I say unto thee, that Keith G. Robertson-Turner spake thusly: I shouldn't write these scripts late at night :) I just realised that "next_set" will be undefined (below) if "level" is anything other than 6. > #!/bin/bash > > # /mnt/backup must be available and defined in /etc/fstab > mount /mnt/backup > > mypc=$(hostname) > backup_path="/mnt/backup/${mypc}/sets" > > mkdir -p "${backup_path}"/{1,2}/dumps/{0,1,2,3,4,5,6} > > if [ ! -e "${backup_path}/current" ] > then > echo 1 > "${backup_path}/current" > fi > > level=$(expr $(date +%u) - 1) > > read current_set < "${backup_path}/current" > if [ $level -eq 6 ] > then > if [ $current_set -eq 1 ] > then > next_set=2 > else > next_set=1 > fi ### correction else next_set=$current_set > fi > > uid=$(date +%Y%m%d) > > dump -${level}u -z \ > -f "${backup_path}/${current_set}/dumps/${level}/${mypc}-${uid}.dump" / > > rm -f "${backup_path}/current" > echo $next_set > "${backup_path}/current" > > umount /mnt/backup > > exit 0 > ### It could probably do with some more error checking too. -- Regards, Keith G. Robertson-Turner |
From: Keith G. Robertson-T. <dum...@sl...> - 2009-01-12 02:40:26
|
Verily I say unto thee, that Bruce Hyatt spake thusly: > I have read the dump man-page and searched the mailing-list archives > but I'm still not quite certain about doing incremental backups. > > Assume I do a level 0 backup on Jan 10 and then do a level 1 backup > every day after. You can do that, but that is not an incremental backup, it's what's called a differential backup. Put simply: Full Backup = All files. Differential = All files that have changed since the last /full/ backup. Incremental = All files that have changed since the last differential or incremental backup. Level 0 is always a full backup. Level 1 is always a differential backup. Level <n> is an incremental backup /if/ there are any other backups between 0 and <n>, or a differential backup otherwise. Advantages and disadvantages: A backup policy which utilises only differentials takes longer (overall) to create, but less time to restore. A backup policy which utilises incrementals takes less time (overall) to create, but more time to restore. Here's what you should be doing (IMHO): Day | Level | Set ------------------------- 1 | 0 | 1 2 | 1 | 1 3 | 2 | 1 4 | 3 | 1 5 | 4 | 1 6 | 5 | 1 7 | 6 | 1 8 | 0 | 2 9 | 1 | 2 10 | 2 | 2 11 | 3 | 2 12 | 4 | 2 13 | 5 | 2 14 | 6 | 2 Repeat. E.g. 
have your backup directories as follows: (In this example, a system with the hostname "venus") /mnt/backup/venus/sets/1/dumps/0 /mnt/backup/venus/sets/1/dumps/1 /mnt/backup/venus/sets/1/dumps/2 /mnt/backup/venus/sets/1/dumps/3 /mnt/backup/venus/sets/1/dumps/4 /mnt/backup/venus/sets/1/dumps/5 /mnt/backup/venus/sets/1/dumps/6 /mnt/backup/venus/sets/2/dumps/0 /mnt/backup/venus/sets/2/dumps/1 /mnt/backup/venus/sets/2/dumps/2 /mnt/backup/venus/sets/2/dumps/3 /mnt/backup/venus/sets/2/dumps/4 /mnt/backup/venus/sets/2/dumps/5 /mnt/backup/venus/sets/2/dumps/6 Then backup using the following: #!/bin/bash # /mnt/backup must be available and defined in /etc/fstab mount /mnt/backup mypc=$(hostname) backup_path="/mnt/backup/${mypc}/sets" mkdir -p "${backup_path}"/{1,2}/dumps/{0,1,2,3,4,5,6} if [ ! -e "${backup_path}/current" ] then echo 1 > "${backup_path}/current" fi level=$(expr $(date +%u) - 1) read current_set < "${backup_path}/current" if [ $level -eq 6 ] then if [ $current_set -eq 1 ] then next_set=2 else next_set=1 fi fi uid=$(date +%Y%m%d) dump -${level}u -z \ -f "${backup_path}/${current_set}/dumps/${level}/${mypc}-${uid}.dump" / rm -f "${backup_path}/current" echo $next_set > "${backup_path}/current" umount /mnt/backup exit 0 ### I recommend using LVM snapshots to ensure your dumps are consistent. Save the above as "/usr/local/bin/backup", then chown root:root "/usr/local/bin/backup", then chmod 700 "/usr/local/bin/backup". Add a crontab entry to run e.g. at 3AM every morning. The reason you want two weeks worth of backups is - if you were to continually overwrite the level 0 backup, then it failed, and the filesystem subsequently became corrupted, then you'd have no usable backups to restore from. > Then assume I add filea on Jan 11 and fileb on Jan 12. The way I read > the man page, the Jan 12 dump will include filea even though it is > included in the Jan 11 dump. Is that correct? Yes, because you'd be doing a differential backup. 
If instead of doing 0, 1, 1, 1, etc.; you do 0, 1, 2, 3, etc.; then that won't happen (unless filea somehow changes between one incremental and the next). > Also, the way I read the man page, I cannot dump fat32 files Correct. Dump is for ext2/3 filesystems only. > but I can dump to a fat32 partition You can dump /to/ any filesystem that is writable under Linux, since you are essentially just creating a file. However, take care that you don't exceed any given filesystem's file size limit (e.g. FAT32 = 4GB). > and restore ext3 files from it. Is THAT correct? Yes. -- Regards, Keith G. Robertson-Turner |
From: Bruce H. <bru...@gm...> - 2009-01-11 23:32:16
|
I have read the dump man-page and searched the mailing-list archives but I'm still not quite certain about doing incremental backups. Assume I do a level 0 backup on Jan 10 and then do a level 1 backup every day after. Then assume I add filea on Jan 11 and fileb on Jan 12. The way I read the man page, the Jan 12 dump will include filea even though it is included in the Jan 11 dump. Is that correct? Also, the way I read the man page, I cannot dump fat32 files but I can dump to a fat32 partition and restore ext3 files from it. Is THAT correct? TIA, Bruce |
From: Kenneth P. <sh...@se...> - 2009-01-08 01:08:42
|
On Tuesday, January 06, 2009 10:49 PM +0100 Stelian Pop <st...@po...> wrote: > I'm afraid not. Some early thoughts on the matter a few months ago > seemed to imply that it wasn't going to be easy, but I don't think > anybody looked deeper than that. I looked back and found this brief mention of issues in the archives: <http://sourceforge.net/mailarchive/forum.php?thread_name=18594.63193.515417.994842%40diehard.x&forum_name=dump-devel> |
From: Stelian P. <st...@po...> - 2009-01-06 22:11:32
|
On Mon, Jan 05, 2009 at 09:52:38AM +0100, Helmut Jarausch wrote: > with the advent of Linux 2.6.28 the new ext4 filesystem has been marked > stable and is going to replace the ext3 filesystem. > > Has anybody checked if it's possible to adapt dump/restore to this new > filesystem? I'm afraid not. Some early thoughts on the matter a few months ago seemed to imply that it wasn't going to be easy, but I don't think anybody looked deeper than that. Stelian. -- Stelian Pop <st...@po...> |
From: Helmut J. <jar...@ig...> - 2009-01-05 08:52:47
|
Hi, with the advent of Linux 2.6.28 the new ext4 filesystem has been marked stable and is going to replace the ext3 filesystem. Has anybody checked if it's possible to adapt dump/restore to this new filesystem? Thanks, Helmut. -- Helmut Jarausch Lehrstuhl fuer Numerische Mathematik RWTH - Aachen University D 52056 Aachen, Germany |
From: <phu...@wi...> - 2008-09-22 04:45:10
|
> I dug out the original tape drive that made the tapes. Even > though it was supposedly the same exact model (Seagate Scorpion > 80), it read the tape without a problem, when the other drive > just gave up on that one specific tape from the set over and > over. > > Tape drives are such peculiar animals! I suspect the heads or tape guides on one drive or the other are slightly out of proper alignment. |
From: <br...@ke...> - 2008-09-21 13:21:28
|
On Fri, Sep 19, 2008 at 12:30:34AM -0700, Kenneth Porter wrote: > --On Thursday, September 18, 2008 2:48 PM -0700 Steve Bonds > <fb7...@sn...> wrote: > > > (Note that dd_rescue doesn't use if= or of= and relies on the > > parameter position to identify the input and output. Best to avoid > > getting them mixed up.) > > When disaster strikes, the first step should be to write-protect your > backup media. I've been doing this since I used floppies on PCs. It's too > easy to make mistakes under the stress of recovery, when suffering from the > drunkenness of adrenalin. I always write protect the tapes after I write them...but I forgot to write back and tell of the happy ending though: I dug out the original tape drive that made the tapes. Even though it was supposedly the same exact model (Seagate Scorpion 80), it read the tape without a problem, when the other drive just gave up on that one specific tape from the set over and over. Tape drives are such peculiar animals! -- + Brent A. Busby + "We've all heard that a million monkeys + UNIX Systems Admin + banging on a million typewriters will + University of Chicago + eventually reproduce the entire works of + Physical Sciences Div. + Shakespeare. Now, thanks to the Internet, + James Franck Institute + we know this is not true." -Robert Wilensky |
From: Kenneth P. <sh...@se...> - 2008-09-19 00:30:50
|
--On Thursday, September 18, 2008 2:48 PM -0700 Steve Bonds <fb7...@sn...> wrote: > (Note that dd_rescue doesn't use if= or of= and relies on the > parameter position to identify the input and output. Best to avoid > getting them mixed up.) When disaster strikes, the first step should be to write-protect your backup media. I've been doing this since I used floppies on PCs. It's too easy to make mistakes under the stress of recovery, when suffering from the drunkenness of adrenalin. |
From: Steve B. <fb7...@sn...> - 2008-09-18 14:48:43
|
If a tape is nearly unreadable (or completely unreadable in parts) restore may choke even with -y. Another option might be to find enough temp space on disk to hold an image of that tape and use a data recovery tool like "dd_rescue" (http://www.garloff.de/kurt/linux/ddrescue) to pull what you can off the tape to a disk image file. It will fill in the areas its unable to read on the source tape with zeroes in the image file. If this is within file data, the recovered files will be filled with unexpected zeroes. If it's within the metadata of the dump itself, you may still have problems restoring (e.g. restore coredumps, etc.) For example: # dd_rescue -A /dev/nst0 /big_disk/bad_tape.img (Note that dd_rescue doesn't use if= or of= and relies on the parameter position to identify the input and output. Best to avoid getting them mixed up.) -- Steve |
From: Kenneth P. <sh...@se...> - 2008-09-08 05:52:11
|
--On Saturday, September 06, 2008 10:50 PM +0200 Stelian Pop <st...@po...> wrote: > What do you mean by "resume" ? Did restore crashed, or stopped with some > error, and you want to launch it again and compare _the rest_ of the > data ? I'm running it from cron and see this in the log mailed to me: decompression error, block 31241281: data error File decompression error while restoring ./srv/dav/prealign/data2.zip continue? [yn] Dump date: Sun Sep 7 01:17:06 2008 > Normaly, if the bad block is inside an inode data, restore should signal > the error, and ask you if you want to continue (unless you did specify > -y, in which case it does this automatically). I wasn't using -y. Will that retry the block? > If the bad block happens to be at the place some important metadata was > saved, well, I guess bad things can happen... I suspect the block is fine and there's some glitch in the USB HD driver. (I'm also asking on the CentOS list to see if anyone's heard of such an issue. The failing block number is very suspicious.) |
From: Stelian P. <st...@po...> - 2008-09-06 20:50:36
|
Le vendredi 05 septembre 2008 à 16:19 -0700, Kenneth Porter a écrit : > Is there a way to resume "restore -C" after a transient hardware read > failure? What do you mean by "resume" ? Did restore crashed, or stopped with some error, and you want to launch it again and compare _the rest_ of the data ? Normaly, if the bad block is inside an inode data, restore should signal the error, and ask you if you want to continue (unless you did specify -y, in which case it does this automatically). If the bad block happens to be at the place some important metadata was saved, well, I guess bad things can happen... Stelian. -- Stelian Pop <st...@po...> |
From: Stelian P. <st...@po...> - 2008-09-06 20:46:59
|
Le vendredi 05 septembre 2008 à 23:27 -0500, Brent Busby a écrit : > I have been restoring a set of ten tapes, and the last one has proven to > be in horrible shape. Any tips on making a 'restore -R' keep going even > if it wants to die? I am sure many files could be read if it weren't > for the tendency it has to "lose sync" and give up. (Just using '-y' > doesn't make it determined enough to stop quitting...) -y is supposed to let restore try to continue. If it is not able to, it means that either it is not possible or it is not implemented... -- Stelian Pop <st...@po...> |
From: Szabolcs S. <sz...@nt...> - 2008-09-06 10:25:12
|
Kenneth Porter <shiva <at> sewingwitch.com> writes: > I'm backing up to an external USB NTFS drive, using compression. The > resulting backup is about 150 gigabytes (after compression). After the > backup I do a verify with "restore -C". For the last two full backups, I > get a read error at sector 268435455 which turns out to be 0xFFFFFFF. I'm > guessing this might be an issue either in the ntfs-3g FUSE filesystem or > the underlying USB2 subsystem. It's a very typical hardware/kernel driver problem with certain disk drives. There is absolutely nothing ntfs-3g could do about it because the error is at a lower, hardware level. There are more info at http://ntfs-3g.org/support.html#ioerror Regrads, Szaka -- NTFS-3G: http://ntfs-3g.org > I'm able to read the sector fine with dd: > > dd if=/dev/sdb of=/tmp/my.sector bs=512 count=10 skip=268435453 > > Sep 5 15:41:37 segw2 kernel: sd 5:0:0:0: Device not ready: <6>: Current: sense key: Not Ready > Sep 5 15:41:37 segw2 ntfs-3g[22860]: ntfs_attr_pread: ntfs_pread failed: Input/output error > Sep 5 15:41:37 segw2 kernel: Add. Sense: Logical unit not ready, cause not reportable > Sep 5 15:41:37 segw2 kernel: > Sep 5 15:41:37 segw2 kernel: end_request: I/O error, dev sdb, sector 268435455 > Sep 5 15:41:37 segw2 ntfs-3g[22860]: ntfs_attr_pread error reading '/Shelob/0/root/dump' at offset 132989304832: 4096 <> -1: Input/ > Sep 5 15:41:37 segw2 kernel: Buffer I/O error on device sdb1, logical block 33554424 |
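[Editor's note: the "suspicious" sector number quoted in this thread sits exactly at a power-of-two boundary, which supports the addressing-limit explanation over a genuinely bad sector. The arithmetic is easy to verify in the shell:]

```shell
# Sector 268435455 is 2^28 - 1; with 512-byte sectors that is the last
# sector below the 2^37-byte (128 GiB) mark, so a read error pinned to
# this exact sector points at an address limit, not failing media.
printf '0x%X\n' 268435455       # hex form of the failing sector
echo $(( 268435455 * 512 ))     # its byte offset: 137438952960
echo $(( 2 ** 37 ))             # 128 GiB in bytes: 137438953472
```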