#52 dynamo stops sending data to ramdisk

Status: open
Owner: nobody
Labels: None
Priority: 5
Updated: 2015-01-11
Created: 2006-03-22
Creator: Vishnu
Private: No

IOMeter source for Linux was downloaded from the
iometer-devel files on SourceForge.

Only the GUI runs on Windows; Dynamo runs on Linux x86_64 (dual
AMD Opteron) with the Open-iSCSI initiator (other iSCSI initiators
behave the same), using a 1Gig ramdisk LUN from a Chelsio iSCSI
target. Two 10Gig Chelsio fiber cards are connected back to back,
one running the iSCSI initiator and one the iSCSI target.

Dynamo starts sending data for about 20-30 seconds and then stops,
and the performance numbers shown in the GUI jump to about 2x to 3x
the earlier real numbers.
That no data is being sent is verified by the lack of activity on
the 10Gig back-to-back link, and by iostat and top showing no CPU
usage for the iSCSI initiator and target.

Dynamo does not trip up this way with regular SCSI / SATA / SAS /
RAID devices; it continues doing IO to them.

Most probably Dynamo is not handling the very small / low latencies
of the RAMDISK correctly.
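
As a purely illustrative sketch (this is not Dynamo's code; the loop
and variables are invented for the example), a timer with millisecond
granularity can produce exactly this signature when each I/O to a
ramdisk completes in microseconds:

    // Hypothetical sketch, NOT Dynamo source: a clock that ticks in
    // milliseconds mis-measures I/Os that complete in microseconds,
    // so the computed rate blows up or jumps well above the real one.
    #include <chrono>
    #include <cstdio>

    int main() {
        const long bytes_per_io = 128 * 1024;   // 128K sequential writes
        long total_bytes = 0;

        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < 100; ++i) {
            // ...submit one write to the ramdisk LUN here; on a
            // ramdisk it completes in a few microseconds...
            total_bytes += bytes_per_io;
        }
        auto elapsed_ms = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::steady_clock::now() - start).count();

        if (elapsed_ms == 0)
            std::printf("elapsed rounded down to 0 ms; MB/s undefined\n");
        else
            std::printf("%.1f MB/s\n",
                        (total_bytes / 1048576.0) / (elapsed_ms / 1000.0));
        return 0;
    }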

There are no other issues with the setup: other Linux utilities
(fdisk, mkfs, other IO tools) can access the same LUN, through the
same iSCSI and TCP connection, after Dynamo has stopped sending data
to it, and link activity is visible while they run.

Discussion

  • Ming Zhang
    2006-03-23

    Could you try and see if you can reproduce this over a 1Gb link?
    Maybe that 10Gb NIC you have can be set to run at 1Gb as well.
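
    For reference, one way to force the link speed down, assuming the
    driver supports it (the interface name eth1 is just a placeholder):

        ethtool -s eth1 speed 1000 duplex full autoneg off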

     
  • Vishnu
    2006-04-28

    Tried with a 1Gbps link; the same thing happens.
    This happens with the 2004.07.30 version of the Linux dynamo, and
    with the latest devel download, compiled on Linux x86_64.

    The problem seems to be that dynamo is sending very little data
    across. An Ethereal trace is attached for 128K sequential writes.

    ***This happens only when latency is in microseconds, when doing
    IO to a ramdisk.

    **If regular hard disks are used, and latency is in milliseconds,
    the IO goes through correctly, and the reported IO throughput is
    correct.

     
  • Vishnu
    2006-04-28

    Ethereal Trace

     
  • Ming Zhang
    2006-04-28

    What is your testing spec? I saw a similar bug report on the
    iSCSI Enterprise Target list these days: with 5 workers and 128
    outstanding requests, the iSCSI connection breaks soon.

    I will check this with my 1Gb link.

     
  • Ming Zhang
    2006-04-28

    I performed a test with dynamo on Windows, 10 workers, each with
    128 outstanding IOs, against an iSCSI target with a ram disk over
    a 1Gb link. It is OK. I do not have a 10Gb NIC, so I cannot test
    further.

     
  • Vishnu
    2006-04-28

    I am using Linux RH 9 with the UNH 1.60 initiator on kernel
    2.4.29, with dynamo 2004.07.30.

    With the 2004.07.30.post dynamo, I was using open-iscsi 1.0-485,
    on a 2.6.12 x86_64 kernel.

    This happens only with the Linux dynamo; I have never seen it on
    Windows.

    It happens when using a ramdisk on the iSCSI target, with the
    iSCSI initiator on Linux and dynamo on Linux, on 1Gb or 10Gb
    links.

     
  • Ming Zhang
    2006-04-29

    So this is with the Linux dynamo. What is your access pattern: a
    sequential workload or random? How many workers, how many
    outstanding requests, the size of the ram disk, the dynamo box
    hardware configuration... the more information, the better. You
    can send email to the iometer-devel list if the space here is
    limited.

     
  • Vishnu
    2006-04-30

    Initiator / Linux Dynamo system configuration:
    AMD Opteron x 2, 2.4GHz, 2GB memory, Linux 2.6.12 x86_64 SMP,
    Rioworks HDAMA motherboard, 1Gb Broadcom tg3 integrated NICs.
    Linux Dynamo 2004.07.30.POST, with the Windows GUI.
    Access patterns tried out:
    1. 64k 100% sequential read
    2. 128k 100% sequential read
    3. 256k 100% sequential read
    4. 128k 100% sequential write
    5. 256k 100% sequential write
    6. 1MB 100% sequential read
    4 workers, queue depth 1.
    1 disk drive, /dev/sda, presented by the iSCSI initiator.

    iSCSI Target system configuration:
    2 x AMD Opteron 2.4GHz, 2GB memory, Linux 2.6.14 x86_64 SMP,
    1Gb Broadcom tg3 integrated NICs, Rioworks HDAMA motherboard
    Linux iSCSI Enterprise Target (ietd), with /dev/ram1, using 4 /
    16 / 32 MB ramdisk sizes.

    1Gbps network, back to back or through a 1Gig switch.
    IO latency is below 1 millisecond.

    Hope this is useful.

     
  • Vishnu
    2006-07-21

    Hi,

    Have you been able to reproduce this and find a solution? Please
    let me know. Thanks! Any progress on this would be great news!

     
  • Ming Zhang
    2006-07-24

    If you run IET with it, instead of using /dev/ram1, which is so
    small in size, could you try "Lun X Type=NULLIO"? That can give
    you a much larger ram disk. Repeat and see if it is still
    reproducible.

    With a 32MB ram disk, the client-side RAM can hold all of it in
    cache and never needs to send out requests again.
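
    For reference, a minimal ietd.conf sketch of this suggestion (the
    target name is a placeholder, and the Sectors parameter used for
    sizing the NULLIO LUN is an assumption; check your IET version's
    documentation for the exact parameters):

        Target iqn.2006-07.com.example:nulltest
            # NULLIO discards writes and fabricates reads, so no RAM
            # backing is needed on the target
            Lun 0 Sectors=2097152,Type=NULLIO

    With 512-byte sectors this would advertise a 1GB LUN without
    consuming target memory.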