On Tuesday, 07.03.2006 at 10:26 +0900, FUJITA Tomonori wrote:
> From: Arne Redlich <arne.redlich@...>
> Subject: Re: [Iscsitarget-devel] [PATCH] propagate IET I/O errors to initiator
> Date: Sun, 05 Mar 2006 13:57:55 +0100
> > > > I don't think that we will implement the complete error handling. So
> > > > can we do this in a simple way? I've attached changes to
> > > > send_data_rsp(). I think that you can do similar changes to
> > > > send_scsi_rsp().
> > >
> > > Agreed, that's also why I didn't use scsi_sense_hdr right from the
> > > start. Admittedly, I like your idea even better.
> > > I'm gonna rework and repost my patch based on your suggestion later on.
> > Sorry for the long delay, but I've been rather busy working on non-IET
> > stuff for the last weeks.
> No problem at all.
> > Below's a revised version of the patch based on your suggestion.
> Looks nice. I think that we need to see the result of
> sync_page_range() too.
> Index: kernel/file-io.c
> ===================================================================
> --- kernel/file-io.c	(revision 28)
> +++ kernel/file-io.c	(working copy)
> @@ -74,14 +74,11 @@
>  	struct fileio_data *p = (struct fileio_data *) lu->private;
>  	struct inode *inode;
>  	loff_t ppos = (loff_t) tio->idx << PAGE_CACHE_SHIFT;
> -	ssize_t res;
>  
>  	inode = p->filp->f_dentry->d_inode;
>  
> -	res = sync_page_range(inode, inode->i_mapping, ppos, (size_t) tio->size);
> -
> -	return 0;
> +	return sync_page_range(inode, inode->i_mapping, ppos, (size_t) tio->size);
>  }
>  
>  static int open_path(struct iet_volume *volume, const char *path)
Oh yes, you're right, I must have overlooked this one.
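For context, a rough sketch of what the matching change on the response
side could look like -- this is hypothetical and not part of the posted
patch; the helper name fill_write_error_sense() and the exact hook into
send_scsi_rsp()/send_data_rsp() are assumptions. The idea is just that a
negative errno returned from the sync helper above gets turned into a
CHECK CONDITION with fixed-format sense data instead of being dropped:

/*
 * Hypothetical sketch only -- not part of the posted patch.
 * Maps a write I/O failure into fixed-format sense data, roughly
 * what the response path would have to report to the initiator.
 */
#include <linux/string.h>	/* memset() */
#include <scsi/scsi.h>		/* MEDIUM_ERROR (sense key 0x03) */

#define SENSE_LEN 18

static void fill_write_error_sense(unsigned char *sense)
{
	memset(sense, 0, SENSE_LEN);
	sense[0] = 0xf0;		/* valid bit, current error, fixed format */
	sense[2] = MEDIUM_ERROR;	/* sense key */
	sense[7] = SENSE_LEN - 8;	/* additional sense length */
	sense[12] = 0x0c;		/* ASC: WRITE ERROR */
	sense[13] = 0x00;		/* ASCQ */
}

Whether MEDIUM ERROR or HARDWARE ERROR is the more appropriate sense key
is debatable; the point is only that an -EIO from sync_page_range() must
not be swallowed on the way back to the initiator.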
> > Please note that it's only compile-tested.
> I'm happy to merge this patch if someone confirms that it works under
> some kind of artificial error testing.
Yes, it would be really nice if someone could test this; I probably
won't find the time to do so anytime soon.
I think this error path can be triggered by creating an MD RAID array,
putting LVM volumes on top of it, exporting such a volume via IET, and
then destroying the array by removing more drives than it can tolerate
losing (e.g. removing 2 drives from a RAID 5). The LVM devices will
still be available, but I/O to them will of course fail. At least that
was the error condition here that made me aware of the problem.
Another idea that comes to mind would be using md_faulty or the DM
error target for artificial fault injection.