From: Anonymous <ne...@ho...> - 2004-12-15 17:04:18
|
First, let me state the fact that I'm a total newbie.

I would like to know if it's possible to make an iSCSI target out of an ISO9660 (CD-ROM) image file using /dev/loop or any other method. I tried a few things with 0.3.7 and 0.3.8 using MS Initiator 1.06 (build 302), but it never worked. So now I'm just asking whether I did something wrong, or whether it's not possible that way at all and is a limitation of this particular initiator/target implementation.

I used Debian woody with vanilla 2.4.28 and 0.3.7 compiled from source, and the very same initiator from MS. A 10 GB volume over 2x100 Mbit bonding (4 PCs running Windows 2000 Pro were involved in testing) worked fine until I tried to run Raxco PerfectDisk to defrag that NTFS-formatted LVM volume/iSCSI target while it was being read by clients ;) That caused non-critical data corruption. :) I'm not sure if the same is true of other defragmentation tools, or of your later 0.3.8 and 0.4.0 releases. Perhaps I missed something important in ietd.conf that caused this situation. I'm a newbie, remember?

Anyhow, GREAT project, guys. But it would be even nicer if you added some how-tos. |
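For reference, a minimal sketch of how exporting a CD image could be attempted in ietd.conf of that era; the IQN and path below are placeholders. Note that IET appears to present every LUN as a plain disk (SCSI type 0) rather than as a CD-ROM device, so even if the export works, the initiator will see a disk containing raw ISO9660 data, not a mountable CD:

    # /etc/ietd.conf -- hypothetical example, adjust names and paths
    Target iqn.2004-12.com.example:cdimage
        # fileio serves an ordinary file directly, so no loop device is needed
        Lun 0 Path=/srv/images/debian-woody.iso,Type=fileio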
From: K C. <tec...@ya...> - 2004-12-15 17:14:21
|
--- Anonymous <ne...@ho...> wrote:
> I would like to know if it's possible to make an iSCSI target out of an
> ISO9660 (CD-ROM) image file using /dev/loop or any other method. [...]
>
> Anyhow, GREAT project, guys. But it would be even nicer if you added
> some how-tos.

What type of how-tos are you looking for? OS-specific, network configs, HBA support, etc.?

=====
aaarrrggghhh!!!! FreeBSD rocks |
From: Ming Z. <mi...@el...> - 2004-12-15 17:35:27
|
On Wed, 2004-12-15 at 12:03, Anonymous wrote:
> I would like to know if it's possible to make an iSCSI target out of an
> ISO9660 (CD-ROM) image file using /dev/loop or any other method. [...]

Are you using an .iso file, or a CD in a CD-ROM drive?

> [...] worked fine until I tried to run Raxco PerfectDisk to defrag that
> NTFS-formatted LVM volume/iSCSI target while it was being read by
> clients ;) That caused non-critical data corruption. :) [...] Perhaps I
> missed something important in ietd.conf that caused this situation.

There is no parameter related to data corruption issues. Do you mean that you ran a defrag tool on a disk (LVM) volume exported by IET and got data corruption? |
From: Ming Z. <mi...@el...> - 2004-12-15 17:54:40
|
On Wed, 2004-12-15 at 12:51, Anonymous wrote:
> > There is no parameter related to data corruption issues. Do you mean
> > that you ran a defrag tool on a disk (LVM) volume exported by IET and
> > got data corruption?
>
> Yes, that's the situation I had with 0.3.7 - not sure about your later
> releases.
>
> A 10 GB blank (no fs) LVM volume exported as an IET iSCSI target.
> 4 PCs running Windows 2000 Pro with MS iSCSI Software Initiator 1.06
> (Build 302).
>
> On the workstation side I mounted the target as a basic disk and
> NTFS-formatted it. I put some files on it for testing. Then I ran defrag
> while the clients had some files from the target open. That caused
> corruption: the files that were open when the defrag took place were
> corrupted.

Would a standard defrag program under Windows also cause this? I cannot test it without your program, so how do we find a way to reproduce it? :P |
From: Arne R. <arn...@xi...> - 2004-12-16 08:03:41
|
On Wednesday, 2004-12-15 at 19:03 +0200, Anonymous wrote:
> I used Debian woody with vanilla 2.4.28 and 0.3.7 compiled from source
> [...] worked fine until I tried to run Raxco PerfectDisk to defrag that
> NTFS-formatted LVM volume/iSCSI target while it was being read by
> clients ;) That caused non-critical data corruption. :)

Does W2K Pro with NTFS allow several hosts to access a volume concurrently?

Arne
--
Arne Redlich
Xiranet Communications GmbH |
From: Ming Z. <mi...@el...> - 2004-12-16 15:03:23
|
On Thu, 2004-12-16 at 03:03, Arne Redlich wrote:
> Does W2K Pro with NTFS allow several hosts to access a volume
> concurrently?

No, he did not use several hosts; he just used several applications. :P |
From: Arne R. <arn...@xi...> - 2004-12-16 09:15:51
|
On Thursday, 2004-12-16 at 10:50 +0200, Anonymous wrote:
> > Does W2K Pro with NTFS allow several hosts to access a volume
> > concurrently?
>
> I'm not sure I got your question right. What does W2K Pro NTFS have to
> do with an IET iSCSI-exported target, such that it could allow or
> disallow access to it for other clients/initiators?
>
> The target was a blank LVM volume iSCSI-exported from a Linux 2.4.28
> host, then initiated and NTFS-formatted on a W2K Pro workstation. Yes,
> 0.3.7 allowed me to access this target from several hosts/initiators
> simultaneously - as I said, 4 PCs running W2K Pro were involved in
> testing. Everything went fine, at least in read-only mode. Then I set up
> an NTFS ACL on this NTFS volume so that 3 clients (users) could only
> read, while the 4th one could read and write.

Since I'm no Windows expert, I just wanted to know whether it is safe to access an NTFS (iSCSI) volume from several hosts at the same time. Most other file systems don't support this, so in that case the admin should make sure that only one initiator has access to an iSCSI volume; otherwise the fs will become corrupted.

Arne
--
Arne Redlich
Xiranet Communications GmbH |
From: K C. <tec...@ya...> - 2004-12-16 10:06:36
|
--- Arne Redlich <arn...@xi...> wrote:
> Since I'm no Windows expert, I just wanted to know whether it is safe to
> access an NTFS (iSCSI) volume from several hosts at the same time. Most
> other file systems don't support this, so in that case the admin should
> make sure that only one initiator has access to an iSCSI volume;
> otherwise the fs will become corrupted.

I agree. NTFS is not a cluster file system; multiple machines accessing a volume will toast the data sooner or later. |
From: Ming Z. <mi...@el...> - 2004-12-16 15:03:56
|
NTFS does not support that; only a network file system (e.g. NFS) or a cluster file system does.

On Thu, 2004-12-16 at 04:15, Arne Redlich wrote:
> Since I'm no Windows expert, I just wanted to know whether it is safe to
> access an NTFS (iSCSI) volume from several hosts at the same time. [...]
> the admin should make sure that only one initiator has access to an
> iSCSI volume, otherwise the fs will become corrupted. |
From: Ming Z. <mi...@el...> - 2005-04-13 03:03:01
|
Maybe a dumb question, but why are SCSI command arguments "IN" while response arguments are "OUT"? From the initiator's point of view, a request is outbound, right?

3.5.1.1. SCSI-Command

This request carries the SCSI CDB and all the other SCSI execute-command procedure call (see [SAM2]) IN arguments, such as task attributes, Expected Data Transfer Length for one or both transfer directions (the latter for bidirectional commands), and Task Tag (as part of the I_T_L_x nexus). The SCSI-Response carries all the SCSI execute-command procedure call (see [SAM2]) OUT arguments and the SCSI execute-command procedure call return value.

ming |
From: KUN H. <kun...@ug...> - 2005-04-18 02:45:40
|
Hi,

I got this message when installing under kernel 2.6.11:

    [root@mayo iscsitarget-0.4.6]# /etc/init.d/iscsi-target start
    Starting iSCSI target service: modprobe: QM_MODULES: Function not implemented
    modprobe: QM_MODULES: Function not implemented
    modprobe: Can't locate module crc32c
    modprobe: QM_MODULES: Function not implemented
    modprobe: QM_MODULES: Function not implemented
    modprobe: Can't locate module iscsi_trgt
    nl_open -1
    nl_fd : Connection refused
    [FAILED]

Googled results show that module-init-tools needs to be installed, so:

    [root@mayo module-init-tools-3.1]# ./generate-modprobe.conf /etc/modprobe.conf
    modprobe: QM_MODULES: Function not implemented
    Warning: not translating path[toplevel]=/lib/modules/2.6

So what's the solution?

Thanks!

Kun Huang |
From: Ming Z. <mi...@el...> - 2005-04-18 02:50:56
|
What is your original Linux distribution? Try to install the module utilities correctly and make sure they load/unload modules smoothly before trying IET. This should not be a problem in IET itself.

ming

On Sun, 2005-04-17 at 22:45 -0400, KUN HUANG wrote:
> Got this message when installing under kernel 2.6.11:
>
> modprobe: QM_MODULES: Function not implemented
> modprobe: Can't locate module crc32c
> [...]
> modprobe: Can't locate module iscsi_trgt
> [FAILED] |
From: Jerry A. <je...@pb...> - 2005-04-18 03:37:29
|
Ming Zhang writes:
> What is your original Linux distribution? Try to install the module
> utilities correctly and make sure they load/unload modules smoothly
> before trying IET. This should not be a problem in IET itself.
>
> On Sun, 2005-04-17 at 22:45 -0400, KUN HUANG wrote:
>> modprobe: Can't locate module crc32c

Make sure you've enabled loadable module support when building the kernel. Check http://joshwelch.com/stuff/ietd.html

jerry |
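The QM_MODULES errors above are the classic symptom of an old modutils userland talking to a 2.6 kernel (the query_module syscall was removed in 2.6). A minimal sketch of how one might verify the tooling and the modules by hand; installing module-init-tools is distribution-specific and assumed here:

    # should report module-init-tools, not modutils, on a 2.6 kernel
    modprobe --version

    # then try loading IET's dependencies manually
    modprobe crc32c        # CRC32C digest support used by IET
    modprobe iscsi_trgt    # the IET kernel module itself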
From: Joshua G. <jo...@gr...> - 2005-08-19 18:36:22
|
> Hello,
>
> Is there any way to limit IET so that there can only be one initiator at
> a time? Right now I have MaxConnections set to 1, but multiple Microsoft
> initiators can still connect to the same disk on the target.

http://sourceforge.net/mailarchive/message.php?msg_id=12637640

> I want to prevent this. Can IET do this? Thanks again.
>
> Joshua Grauman
> jnfo-a@gr...

If your desired behaviour doesn't require anonymous access restriction based solely on the number of open sessions, but allows you to make use of the initiator's identity instead, you could use distinct CHAP accounts and/or IET's access control mechanism - cf. "IncomingUser" in ietd.conf and initiators.(allow|deny) in the tarball's etc/ directory.

Arne |
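A minimal sketch of the two mechanisms Arne mentions; the IQN, credentials, and address below are placeholders, and the exact initiators.allow syntax should be checked against the version shipped in the tarball:

    # /etc/ietd.conf -- per-target CHAP account
    # (the Microsoft initiator expects CHAP secrets of 12-16 characters)
    Target iqn.2005-08.com.example:storage.disk1
        IncomingUser joshua secret12chars
        Lun 0 Path=/dev/vg0/iscsi_lv,Type=fileio

    # /etc/initiators.allow -- only this address may reach the target
    iqn.2005-08.com.example:storage.disk1 192.168.1.10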
From: mqheather<mqh...@ho...> - 2006-07-31 04:36:54
|
Hi all,

Someone told me that there is a parameter called block size in the iSCSI target module driver that can be changed, but I haven't found it in the source code. Can you give me a tip as to whether there indeed is such a parameter in the iSCSI target? Thank you!

heather

mqheather, mqheather522@hotmail.com
2006-07-31 |
From: <st...@wp...> - 2007-04-11 10:27:49
Attachments:
hardware+config.desc
mpiotest_all.sh
|
Hi.

I did performance tests and have strange results. Attached: test script, hardware and config description.

Target: Debian, 4 GB RAM
Initiator: Fedora, 256 MB RAM
The file for fileio was on XFS.
The file system on the iSCSI disk was ext3.
The network has jumbo frames.

I use rr_min_io = 2 in /etc/multipath.conf and /sys/block/sdX/queue/max_sectors_kb = 32; otherwise I get no benefit from mpio.

Values in MB/s:

                            mpio_fileio  mpio_blockio  spio_fileio  spio_blockio
    direct single write         200          155           113          109
    direct single read          199           94            96          104
    direct multiple writes      205          157           111          109
    direct multiple reads        83           89            89          109
    direct multiple w/r       95/55        54/46         87/56        60/52

    cached single write         150           85           108          115
    cached single read           98           62            79           38
    cached multiple writes       90           67           109          104
    cached multiple reads        82           83            64           63
    cached multiple w/r       87/47        30/21        113/42        78/32

I can understand why direct reads for mpio_blockio are slower than for spio_blockio: the requests are more fragmented and there are more seeks on the hard drives.

I can't understand the results for cached IO (the default on Linux):
1) why fileio is better than blockio in most cases
2) why spio_blockio is better than mpio_blockio in most cases
3) why the single read is so slow for spio_blockio
4) if this is a problem in the virtual memory or file system layer on the initiator, why the results differ.

--
Regards
Stanislaw Gruszka |
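A minimal sketch of the two tunings mentioned above; the device name sdX is a placeholder, and the multipath.conf syntax varies between multipath-tools versions:

    # /etc/multipath.conf -- switch paths every 2 I/Os in round-robin
    defaults {
            rr_min_io 2
    }

    # cap the request size submitted per path (repeat for each path device)
    echo 32 > /sys/block/sdX/queue/max_sectors_kb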
From: Ming Z. <bla...@gm...> - 2007-04-11 13:19:22
|
On Wed, 2007-04-11 at 12:27 +0200, Stanisław Gruszka wrote:
> I did performance tests and have strange results.
>
> Target: Debian, 4 GB RAM
> Initiator: Fedora, 256 MB RAM
> The file for fileio was on XFS.
> The file system on the iSCSI disk was ext3.
> The network has jumbo frames.

You forgot one big part of the puzzle here: what is your storage?

> I use rr_min_io = 2 in /etc/multipath.conf and
> /sys/block/sdX/queue/max_sectors_kb = 32; otherwise I get no benefit
> from mpio.

Is your mpio on the iSCSI initiator side or on the target side?

> I can't understand the results for cached IO (the default on Linux):
> 1) why fileio is better than blockio in most cases

Is your fileio write back or write through? |
From: Stanislaw G. <st...@wp...> - 2007-04-12 08:43:54
|
On Wednesday 11 April 2007 13:19, Ming Zhang wrote:
> You forgot one big part of the puzzle here: what is your storage?

Storage is a RAID0 of 8 disks. Here are the test results of my script on a 50 GB volume with an ext3 filesystem on this device:

    direct single write      301
    direct single read       178
    direct multiple writes   214
    direct multiple reads    148
    direct multiple w/r      262/102

    cached single write      302
    cached single read       743
    cached multiple writes   222
    cached multiple reads     74
    cached multiple w/r      53/54

> > I use rr_min_io = 2 in /etc/multipath.conf and
> > /sys/block/sdX/queue/max_sectors_kb = 32; otherwise I get no benefit
> > from mpio.
>
> Is your mpio on the iSCSI initiator side or on the target side?

mpio is on the initiator side, through iSCSI, handled by multipath.

> > I can't understand the results for cached IO (the default on Linux):
> > 1) why fileio is better than blockio in most cases
>
> Is your fileio write back or write through?

Write back.

--
Cheers
Stanislaw Gruszka |
From: Ming Z. <bla...@gm...> - 2007-04-12 13:53:41
|
On Thu, 2007-04-12 at 09:46 +0000, Stanislaw Gruszka wrote:
> Storage is a RAID0 of 8 disks. Here are the test results of my script on
> a 50 GB volume with an ext3 filesystem on this device: [...]

I feel we are quite off-list now. ;)

HW RAID or SW RAID? If HW RAID, what is the controller, how much cache is onboard, and is it write back or write through?

What does your script do? When you say multiple read/write, how many concurrent streams do you have? |
From: Stanislaw G. <st...@wp...> - 2007-04-13 08:43:44
|
On Thursday 12 April 2007 13:53, Ming Zhang wrote:
> HW RAID or SW RAID? If HW RAID, what is the controller, how much cache
> is onboard, and is it write back or write through?

HW RAID with 256 MB write-back cache.

> What does your script do? When you say multiple read/write, how many
> concurrent streams do you have?

single write, single read: 1x1GB
multiple writes, multiple reads: 6x1GB
multiple w/r: 3x1GB writes and 3x1GB reads simultaneously
All reads/writes were done with the dd tool.
I mean "cached" when I did normal dd operations.
I mean "direct" when I did dd's with oflag/iflag=direct (I think the file is opened with the O_DIRECT flag then).

>> [results table snipped]
>>
>> I can't understand the results for cached IO (the default on Linux):
>> 1) why fileio is better than blockio in most cases
>
> with fileio as write back, it is quite possible.
>
>> 2) why spio_blockio is better than mpio_blockio in most cases
>
> who said mpio will increase the performance for sure? ;)

It increases when the network is the bottleneck. We can see this with mpio_fileio for single read/write and for multiple writes, when the page cache on the target stores almost all the data.

> run 2 cases again, capture some tcpdump traces, we can tell more. most
> still because of seek i feel.

The hardware I used is already being used for other tests, so I can't do any dumps now; I don't know when the hardware will be free.

>> 3) why the single read is so slow for spio_blockio
>
> no read ahead

For direct IO the read-ahead is the same, and the results differ (direct spio_blockio is very good).

The path for data from user space on the initiator to the disk on the target is long:

- filesystem on initiator
- page cache on initiator (bypassed with direct IO)
- device mapper (only with mpio)
- block layer and block device driver on initiator
- iSCSI initiator
- network layer and NIC driver on initiator
- network
- network layer and NIC driver on target
- IET
- filesystem on target (bypassed with blockio)
- page cache on target (bypassed with blockio)
- block layer (device mapper with LVM)
- block device driver
- hardware RAID controller

All these layers have an impact on performance. I made performance comparisons where some of these layers were bypassed while the parameters of the others were unchanged. I'm not saying IET has bad performance; it's rather an overall system problem that cached IO on the initiator is so slow and differs so strangely depending on whether the target uses blockio or fileio.

--
Stanislaw Gruszka |
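For reference, a minimal sketch of the kind of dd runs described above; the mount point and file name are placeholders:

    # "cached": normal buffered I/O through the initiator's page cache
    dd if=/dev/zero of=/mnt/iscsi/test1 bs=1M count=1024

    # "direct": bypass the initiator's page cache with O_DIRECT
    dd if=/dev/zero of=/mnt/iscsi/test1 bs=1M count=1024 oflag=direct
    dd if=/mnt/iscsi/test1 of=/dev/null bs=1M count=1024 iflag=direct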
From: Ming Z. <bla...@gm...> - 2007-04-13 13:26:51
|
On Fri, 2007-04-13 at 10:42 +0000, Stanislaw Gruszka wrote:
> HW RAID with 256 MB write-back cache.
>
> single write, single read: 1x1GB
> multiple writes, multiple reads: 6x1GB
> multiple w/r: 3x1GB writes and 3x1GB reads simultaneously

dd'ing 1 GB is not much, especially when your target has 4 GB of RAM. Is the target a 64-bit OS or a 32-bit one?

> > who said mpio will increase the performance for sure? ;)
>
> It increases when the network is the bottleneck. We can see this with
> mpio_fileio for single read/write and for multiple writes, when the page
> cache on the target stores almost all the data.

Eventually it will reach the disk. And as you said, fileio can cache but blockio does not, so mpio_fileio being good does not mean mpio_blockio should be good as well.

> The hardware I used is already being used for other tests, so I can't do
> any dumps now; I don't know when the hardware will be free.

I see.

> > no read ahead
>
> For direct IO the read-ahead is the same, and the results differ
> (direct spio_blockio is very good).

blockio does not have read-ahead on the target side either.

> I'm not saying IET has bad performance; it's rather an overall system
> problem that cached IO on the initiator is so slow and differs so
> strangely depending on whether the target uses blockio or fileio.

No, I am not here to defend IET. In fact, I like to see where IET sucks in some area, so that we can find out why and improve it. This is exactly what Ross did: he thought fileio did not fit his situation, and Andre happened to post blockio code. Then we refined it and got it merged.

Ming |