Thread: FRESH_CONNECT=1 and TIMER_AFTER_URL_SLEEP=5000
Status: Alpha
Brought to you by:
coroberti
From: BuraphaLinux S. <bur...@gm...> - 2007-08-02 10:49:37
Hello,
I have a severe problem using curl-loader with vsftpd. I
reported it once before but I guess I didn't supply enough
information. I use
FRESH_CONNECT=1
and I use
TIMER_AFTER_URL_SLEEP=5000
So what I expect to happen is that after the download and the
connection is closed, there will be a 5 second pause, then a new
connection will be opened. Unfortunately this is not what happens. I
guess that after the download there is a 5 second pause, then a close
and reopen almost instantly. The problem is that on my ftp server it
takes time to tear down the socket and end the process. Most human
users would have a gap while they pick something else and click on it.
Automated tools would reuse the connection (no FRESH_CONNECT=1). I
am trying to simulate humans hitting the ftp site quickly, but not
instantly. I have 'max_clients' set in my vsftpd.conf file to limit
the total number of people connecting.
What happens is that after one batch of URLs is fetched, the
curl-loader goes insane.
It tries to connect instantly and goes into an ultra-fast loop doing
that. Well the ftp server correctly just refuses everything at that
point since curl-loader is opening hundreds of connections without
waiting between failures. The curl-loader does not slow down when it
gets notified there are too many connections from the server. I can
see parsing that for many different servers may not be practical and I
have no problem with that. I do need a configurable pause between
connection attempts however.
I want to have the system pause after disconnection when I'm using
FRESH_CONNECT=1 and I guess we need some new TIMER_BETWEEN_CONNECTIONS
or some such.
Before I go and code that, I would like to know if it can already be
done some other way. I can provide wireshark traces if you need them,
ftp logs, etc. I am using curl_loader-0.41.
Thank you,
JGH
From: Robert I. <cor...@gm...> - 2007-08-02 11:27:09
On 8/2/07, BuraphaLinux Server <bur...@gm...> wrote:
>
> Hello,

Nice hearing from you.

> FRESH_CONNECT=1
> TIMER_AFTER_URL_SLEEP=5000
>
> So what I expect to happen is that after the download and the
> connection is closed, there will be a 5 second pause, then a new
> connection will be opened. Unfortunately this is not what happens. I
> guess that after the download there is a 5 second pause, then a close
> and reopen almost instantly.

My understanding here is that, when FRESH_CONNECT is set, we need:
1) to close the connection;
2) only after the connection is closed, to sleep for TIMER_AFTER_URL_SLEEP.

Please confirm that I got it. Thanks.
In any case your proposal is "The Right Thing to Do" (myTM) :)

> What happens is that after one batch of URLs is fetched, the
> curl-loader goes insane.
> It tries to connect instantly and goes into an ultra-fast loop doing
> that. Well the ftp server correctly just refuses everything at that
> point since curl-loader is opening hundreds of connections without
> waiting between failures.

It is actually DoS-ing. I hope that this will not be the primary usage
of the tool. :)

I will try next week (Sunday/Monday) to come up with this correction
and will inform you when it is committed.

My best wishes.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: BuraphaLinux S. <bur...@gm...> - 2007-08-02 13:11:36
On 8/2/07, Robert Iakobashvili <cor...@gm...> wrote:
>
> On 8/2/07, BuraphaLinux Server <bur...@gm... > wrote:
> > Hello,
>
> Nice hearing from you.
>
> > FRESH_CONNECT=1
> > TIMER_AFTER_URL_SLEEP=5000
> >
> > So what I expect to happen is that after the download and the
> > connection is closed, there will be a 5 second pause, then a new
> > connection will be opened. Unfortunately this is not what happens. I
> > guess that after the download there is a 5 second pause, then a close
> > and reopen almost instantly.
>
> My understanding here is that, when FRESH_CONNECT is set,
> we need:
> 1) to close the connection;
> 2) only after the connection is closed to sleep for TIMER_AFTER_URL_SLEEP;
>
> Please, confirm, that I got it. Thanks.
> In any case your proposal is "The Right Thing to Do" (myTM)
> :)
>
> > What happens is that after one batch of URLs is fetched, the
> > curl-loader goes insane.
> > It tries to connect instantly and goes into an ultra-fast loop doing
> > that. Well the ftp server correctly just refuses everything at that
> > point since curl-loader is opening hundreds of connections without
> > waiting between failures.
> >
>
> It is actually DOS-ing. Hope, that this will not be the primary usage of the
> tool.
> :)
>
> I will try next week (Sunday/Monday) to come with this correction and will
> inform you,
> when committed.
>
> My best wishes.
>
> Sincerely,
> Robert Iakobashvili,
Hello,
Yes, you understand the problem and what the solution is. I look
forward to testing your fix. Thanks!
JGH
From: Robert I. <cor...@gm...> - 2007-08-05 08:36:33
On 8/2/07, BuraphaLinux Server <bur...@gm...> wrote:
>
> On 8/2/07, Robert Iakobashvili <cor...@gm...> wrote:
> >
> > My understanding here is that, when FRESH_CONNECT is set, we need:
> > 1) to close the connection;
> > 2) only after the connection is closed, to sleep for TIMER_AFTER_URL_SLEEP.
>
> Yes, you understand the problem and what the solution is. I look
> forward to testing your fix. Thanks!
>
> JGH

Please try the latest svn version.
Unfortunately, I am short of time to test it properly and only ran a
small test. Please test whether it works for you.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: BuraphaLinux S. <bur...@gm...> - 2007-08-05 11:39:15
Hello,
In testing I did not see any difference. In the logs I get blocks
like this when curl-loader is cycling too fast (sorry gmail wraps
badly):
: eff-url: ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2,
url:
121 20 (10.16.68.197) <= Recv header: eff-url:
ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2,
url:
121 20 (10.16.68.197) :<= WARNING: parsing error: wrong response code (FTP?) 0 .
121 20 (10.16.68.197) !! ERROR: This doesn't seem like a nice
ftp-server response
: eff-url: ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2,
url:
121 20 (10.16.68.197) :== Info: Expire cleared
: eff-url: ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2,
url:
121 20 (10.16.68.197) :== Info: Closing connection #117
I worry that 'parsing error' means it doesn't do the delay, but I
don't really know.
The server side of this will say this in the vsftpd log file:
Sun Aug 5 18:17:58 2007 [pid 10839] CONNECT: Client "10.16.68.197",
"Connection refused: too many sessions."
Using ss -tan (or netstat) I can verify there are too many connections
on the server.
The vsftpd server has been set for 100 clients with these entries in
vsftpd.conf:
#
# Maximum number of simultaneous clients
#
max_clients=100
#
# Maximum number of connections from 1 ip
#
max_per_ip=100
(and I painfully tested by hand that this limit works as expected)
My configuration is ftp.small for curl-loader and is this:
########### GENERAL SECTION ################################
BATCH_NAME= ftpsmall
CLIENTS_NUM_MAX=50 # Same as CLIENTS_NUM
CLIENTS_NUM_START=5
CLIENTS_RAMPUP_INC=5
INTERFACE=eth0
NETMASK=32
IP_ADDR_MIN=10.16.68.197
IP_ADDR_MAX=10.16.68.197
CYCLES_NUM=-1
URLS_NUM=1
########### URL SECTION ####################################
URL=ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2
FRESH_CONNECT=1
URL_SHORT_NAME="small"
TIMER_URL_COMPLETION = 0 # In msec. When positive, Now it is enforced
by cancelling url fetch on timeout
TIMER_AFTER_URL_SLEEP =5000
TIMER_TCP_CONN_SETUP=50
and was invoked with this script (as root user):
#! /sbin/bash
rm -fr ftpsmall.*
ulimit -d unlimited
ulimit -f unlimited
ulimit -m unlimited
ulimit -n 19999
ulimit -t unlimited
ulimit -u unlimited
ulimit -v unlimited
ulimit -x unlimited
echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
echo 1 > /proc/sys/net/ipv4/tcp_ecn
echo 1 > /proc/sys/net/ipv4/tcp_window_scaling
curl-loader -f ftp.small -v -u -w
With the limit set to 50 on the client and the delays, I expect that
having the server set to 100 should be enough to let this script run
without errors, but it never does. What am I doing wrong? I can provide more
data, but I need to know what data will help you. Please let me know.
I didn't attach the entire curl-loader log since even for a few
minute run it was 38MB.
JGH
On 8/5/07, Robert Iakobashvili <cor...@gm...> wrote:
> On 8/2/07, BuraphaLinux Server <bur...@gm...> wrote:
> >
> > On 8/2/07, Robert Iakobashvili <cor...@gm...> wrote:
> > >
> > > My understanding here is that, when FRESH_CONNECT is set,
> > > we need:
> > > 1) to close the connection;
> > > 2) only after the connection is closed to sleep for
> > TIMER_AFTER_URL_SLEEP;
> >
> > Yes, you understand the problem and what the solution is. I look
> > forward to testing your fix. Thanks!
> >
> > JGH
> >
>
> Please, try the latest svn version.
> Unfortunately, I am short in time to test it properly and only
> run a small test.
> Please, test, if it works for you.
>
> --
> Sincerely,
> Robert Iakobashvili,
> coroberti %x40 gmail %x2e com
> ...........................................................
> http://curl-loader.sourceforge.net
> A web testing and traffic generation tool.
>
From: Robert I. <cor...@gm...> - 2007-08-05 12:15:28
On 8/5/07, BuraphaLinux Server <bur...@gm...> wrote:
>
> Hello,
>
> In testing I did not see any difference. In the logs I get blocks
> like this when curl-loader is cycling too fast (sorry gmail wraps
> badly):

The difference is: when FRESH_CONNECT=1 and the sleep time is positive,
curl-loader is supposed to close its connection to the server, go to
sleep for the sleep timeout, and re-establish the connection after
sleeping. If you increase the sleeping time, you can hopefully observe
the behavior in a sniffer.

> : eff-url: ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2,
> url:
> 121 20 (10.16.68.197) <= Recv header: eff-url:
> ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2,
> url:
> 121 20 (10.16.68.197) :<= WARNING: parsing error: wrong response code (FTP?) 0 .
> 121 20 (10.16.68.197) !! ERROR: This doesn't seem like a nice
> ftp-server response
> : eff-url: ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2,
> url:
> 121 20 (10.16.68.197) :== Info: Expire cleared
> : eff-url: ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2,
> url:
> 121 20 (10.16.68.197) :== Info: Closing connection #117
>
> I worry that 'parsing error' means it doesn't do the delay, but I
> don't really know.

That is not the issue.

> The server side of this will say this in the vsftpd log file:
>
> Sun Aug 5 18:17:58 2007 [pid 10839] CONNECT: Client "10.16.68.197",
> "Connection refused: too many sessions."
>
> Using ss -tan (or netstat) I can verify there are too many connections
> on the server.
> The vsftpd server has been set for 100 clients with these entries in
> vsftpd.conf:
>
> # Maximum number of simultaneous clients
> max_clients=100
> # Maximum number of connections from 1 ip
> max_per_ip=100
>
> (and I painfully tested by hand that this limit works as expected)
>
> My configuration is ftp.small for curl-loader and is this:
>
> ########### GENERAL SECTION ################################
>
> BATCH_NAME= ftpsmall
> CLIENTS_NUM_MAX=50 # Same as CLIENTS_NUM
> CLIENTS_NUM_START=5
> CLIENTS_RAMPUP_INC=5
> INTERFACE=eth0
> NETMASK=32
> IP_ADDR_MIN=10.16.68.197
> IP_ADDR_MAX=10.16.68.197
> CYCLES_NUM=-1
> URLS_NUM=1
>
> ########### URL SECTION ####################################
>
> URL=ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2
> FRESH_CONNECT=1
> URL_SHORT_NAME="small"
> TIMER_URL_COMPLETION=0 # In msec. When positive, it is enforced by
> cancelling the url fetch on timeout
> TIMER_AFTER_URL_SLEEP=5000
> TIMER_TCP_CONN_SETUP=50

Looks good, but I would make TIMER_TCP_CONN_SETUP not larger than 10
seconds.

> and was invoked with this script (as root user):
>
> #! /sbin/bash
> rm -fr ftpsmall.*
> ulimit -d unlimited
> ulimit -f unlimited
> ulimit -m unlimited
> ulimit -n 19999
> ulimit -t unlimited
> ulimit -u unlimited
> ulimit -v unlimited
> ulimit -x unlimited
> echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
> echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
> echo 1 > /proc/sys/net/ipv4/tcp_ecn
> echo 1 > /proc/sys/net/ipv4/tcp_window_scaling
> curl-loader -f ftp.small -v -u -w
>
> With the limit set to 50 on the client and the delays,

OK.

Do you see connections closed by the client (curl-loader) on the
network (a sniffer capture in wireshark/ethereal with a single
curl-loader client could be of assistance) and re-established after 5
seconds, or do the connections remain stalled?

We are calling a libcurl function which is supposed to release the
connection, but if this is buggy, we can dig into the issue.

Another option is that the release by FTP could take some time.
Could you try a larger timeout, like 10000 or 15000?

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: BuraphaLinux S. <bur...@gm...> - 2007-08-05 12:28:08
Hello,
I will get the wireshark trace and post a URL for you to download
it. I can do it 3 times, once with 5000, once with 10000, and once
with 15000. I will change TIMER_TCP_CONN_SETUP to 10. It will take
a while to get the data; I am starting now. I will save all log files
from the run on the client and server so you can look at them too.
JGH
On 8/5/07, Robert Iakobashvili <cor...@gm...> wrote:
> On 8/5/07, BuraphaLinux Server <bur...@gm...> wrote:
> >
> > Hello,
> >
> > In testing I did not see any difference. In the logs I get blocks
> > like this when curl-loader is cycling too fast (sorry gmail wraps
> > badly):
>
>
> The difference is when FRESH_CONNECT=1 and sleep time positive
> curl-loader is supposed to close its connection to server, go to sleep
> for the sleep timeout, and re-establish the connection after sleeping.
>
> If you will increase the sleeping time, you can hopefully observe the
> behavior in sniffer.
>
>
> : eff-url:
> > ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2,
> > url:
> > 121 20 (10.16.68.197) <= Recv header: eff-url:
> > ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2,
> > url:
> > 121 20 (10.16.68.197) :<= WARNING: parsing error: wrong response code
> > (FTP?) 0 .
> > 121 20 (10.16.68.197) !! ERROR: This doesn't seem like a nice
> > ftp-server response
> > : eff-url:
> > ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2,
> > url:
> > 121 20 (10.16.68.197) :== Info: Expire cleared
> > : eff-url:
> > ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2,
> > url:
> > 121 20 (10.16.68.197) :== Info: Closing connection #117
> >
> > I worry that 'parsing error' means it doesn't do the delay, but I
> > don't really know.
>
>
> it's not the issue.
>
> The server side of this will say this in the vsftpd log file:
> >
> > Sun Aug 5 18:17:58 2007 [pid 10839] CONNECT: Client "10.16.68.197",
> > "Connection refused: too many sessions."
> >
> > Using ss -tan (or netstat) I can verify there are too many connections
> > on the server.
> > The vsftpd server has been set for 100 clients with these entries in
> > vsftpd.conf:
> >
> > #
> > # Maximum number of simultaneous clients
> > #
> > max_clients=100
> > #
> > # Maximum number of connections from 1 ip
> > #
> > max_per_ip=100
> >
> > (and I painfully tested by hand that this limit works as expected)
> >
> > My configuration is ftp.small for curl-loader and is this:
> >
> > ########### GENERAL SECTION ################################
> >
> > BATCH_NAME= ftpsmall
> > CLIENTS_NUM_MAX=50 # Same as CLIENTS_NUM
> > CLIENTS_NUM_START=5
> > CLIENTS_RAMPUP_INC=5
> > INTERFACE=eth0
> > NETMASK=32
> > IP_ADDR_MIN=10.16.68.197
> > IP_ADDR_MAX=10.16.68.197
> > CYCLES_NUM=-1
> > URLS_NUM=1
> >
> > ########### URL SECTION ####################################
> >
> > URL=ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2
> > FRESH_CONNECT=1
> > URL_SHORT_NAME="small"
> > TIMER_URL_COMPLETION = 0 # In msec. When positive, Now it is enforced
> > by cancelling url fetch on timeout
> > TIMER_AFTER_URL_SLEEP =5000
> > TIMER_TCP_CONN_SETUP=50
>
>
> Looks good, but I would make TIMER_TCP_CONN_SETUP not larger than 10
> seconds.
>
> and was invoked with this script (as root user):
> >
> > #! /sbin/bash
> > rm -fr ftpsmall.*
> > ulimit -d unlimited
> > ulimit -f unlimited
> > ulimit -m unlimited
> > ulimit -n 19999
> > ulimit -t unlimited
> > ulimit -u unlimited
> > ulimit -v unlimited
> > ulimit -x unlimited
> > echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
> > echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
> > echo 1 > /proc/sys/net/ipv4/tcp_ecn
> > echo 1 > /proc/sys/net/ipv4/tcp_window_scaling
> > curl-loader -f ftp.small -v -u -w
> >
> > With the limit set to 50 on the client and the delays,
>
>
>
> OK.
>
> Do you see connections closed by client (curl-loader)
> in network (sniffer wireshark/ethereal capture with a single curl-loader
> client could be of
> assistance) and re-established after 5 seconds,
> or the connections are remaining stalled?
>
> We are calling libcurl function, which is supposed to release the
> connection,
> but if this is buggy, thus, we can dig into the issue.
>
> Another option is that the release by FTP could take some time.
> Could you try a larger timeout, like 10000, 15 000?
>
> Sincerely,
> Robert Iakobashvili,
> coroberti %x40 gmail %x2e com
> ...........................................................
> http://curl-loader.sourceforge.net
> A web testing and traffic generation tool.
>
From: Robert I. <cor...@gm...> - 2007-08-05 12:41:41
On 8/5/07, BuraphaLinux Server <bur...@gm...> wrote:
>
> Hello,
>
> I will get the wireshark trace and post a URL for you to download
> it. I can do it 3 times, once with 5000, once with 10000, and once
> with 15000. I will change TIMER_TCP_CONN_SETUP to 10. It will take
> a while to get the data; I am starting now. I will save all log files
> from the run on the client and server so you can look at them too.
>
> JGH

5000 and 15000 for a single client are enough.
Thanks.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: BuraphaLinux S. <bur...@gm...> - 2007-08-05 13:29:06
On 8/5/07, Robert Iakobashvili <cor...@gm...> wrote:
> On 8/5/07, BuraphaLinux Server <bur...@gm...> wrote:
> > Hello,
> >
> > I will get the wireshark trace and post a URL for you to download
> > it. I can do it 3 times, once with 5000, once with 10000, and once
> > with 15000. I will change TIMER_TCP_CONN_SETUP to 10. It will take
> > a while to get the data; I am starting now. I will save all log files
> > from the run on the client and server so you can look at them too.
> >
> > JGH
>
> 5000 and 15000 for a single client are enough.
> Thanks. Sincerely, Robert Iakobashvili

I finished before I got your email. The wireshark trace files are
truly huge. I compressed them, but they don't compress much. They
are hundreds of MB. I can burn some CD-ROMs and send them to you by
air mail if it's too big to download.

Everything (including the source I used for curl-loader and vsftpd and
the build scripts for them, the 3 test runs, client and server logs,
and the huge wireshark stuff) is here:

http://www.buraphalinux.org/download/curl_loader/

JGH
From: Robert I. <cor...@gm...> - 2007-08-05 14:07:50
On 8/5/07, BuraphaLinux Server <bur...@gm...> wrote:
>
> I finished before I got your email.

Sorry.

> The wireshark trace files are truly huge. I compressed them, but
> they don't compress much. They are hundreds of MB. I can burn some
> CD-ROMs and send them to you by air mail if it's too big to download.

Could you, please, place on your ftp-server a small file of 100-500 K
in size and run curl-loader to fetch the file? Thanks.

What I would like to see is just the TCP setup/closure and FTP
setup/closure, not the huge file-body flow.

You may first wish to verify, however, that you can reproduce the
phenomenon with such a small file.

By the way, what are the sizes of your files?

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Robert I. <cor...@gm...> - 2007-08-05 15:22:39
On 8/5/07, Robert Iakobashvili <cor...@gm...> wrote:
>
> Could you, please, place on your ftp-server a small file of 100-500 K
> in size and run curl-loader to fetch the file? Thanks.
>
> What I would like to see is just the TCP setup/closure and FTP
> setup/closure, not the huge file-body flow.
>
> You may first wish to verify, however, that you can reproduce the
> phenomenon with such a small file.
>
> By the way, what are the sizes of your files?

I can reproduce the problem. Please don't spend your time on the above.

We have asked the libcurl forum for advice.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: BuraphaLinux S. <bur...@gm...> - 2007-08-05 16:33:14
Hello,
I'm sorry to be so much trouble. I will wait for further
instructions when you have something new for me to test. At that
time I'll use a 200K file instead, as you requested. The test
file in this case was about 4.5MB. Now that you can reproduce it you
know the bug is real at least :-)
JGH
On 8/5/07, Robert Iakobashvili <cor...@gm...> wrote:
> On 8/5/07, Robert Iakobashvili <cor...@gm...> wrote:
> >
> >
> > Could you, please, place on your ftp-server a small file of 100-500 K
> > in size and run curl-loader to fetch the file? Thanks.
> >
> > What I would like to see is just the TCP setup/closure and FTP
> > setup/closure, not the huge file-body flow.
> >
> > You may first wish to verify, however, that you can reproduce the
> > phenomenon with such a small file.
> >
> > By the way, what are the sizes of your files?
>
>
>
> I can reproduce the problem. Please don't spend your time on the above.
>
> We have asked the libcurl forum for advice.
>
> --
> Sincerely,
> Robert Iakobashvili,
> coroberti %x40 gmail %x2e com
> ...........................................................
> http://curl-loader.sourceforge.net
> A web testing and traffic generation tool.
>
From: Robert I. <cor...@gm...> - 2007-08-05 16:45:03
On 8/5/07, BuraphaLinux Server <bur...@gm...> wrote:
>
> I'm sorry to be so much trouble.

Not at all. We are supporting the code.

> I will wait for further
> instructions when you have something new for me to test.

Yes, please wait. Thanks.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Robert I. <cor...@gm...> - 2007-08-06 05:40:31
On 8/5/07, BuraphaLinux Server <bur...@gm...> wrote:
>
> I will wait for further
> instructions when you have something new for me to test.

It requires development of some libcurl features/patches, which seems
to be development on a rather large scale. Hopefully I can do it in
September, but without commitment, sorry.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: BuraphaLinux S. <bur...@gm...> - 2007-08-06 05:49:54
Hello,
It appears that libcurl lacks good ftp support if you use more
than one connection. I understand that you are not eager to write
your own FTP client library and fixing theirs would be quite labor
intensive. I will try to get this job done with shell scripting for
now, but when (if?) libcurl is fixed so you can fix curl-loader, I
will be ready to do full testing for you at that time. For now,
perhaps the man page needs to be updated to document this issue so
nobody else attempts to do it? If that is ok, I can send a patch to
update the man page (including stating that it's a libcurl limitation,
not a curl-loader limitation).
JGH
On 8/6/07, Robert Iakobashvili <cor...@gm...> wrote:
> On 8/5/07, BuraphaLinux Server <bur...@gm...> wrote:
> >
> > I will wait for further
> > instructions when you have something new for me to test.
> >
>
> It requires development of some libcurl feature/patches, that seems to
> be a development of a rather large scale.
> Hopefully, I could do it in September, but without engagement, sorry.
>
> Sincerely,
> Robert Iakobashvili,
> coroberti %x40 gmail %x2e com
> ...........................................................
> http://curl-loader.sourceforge.net
> A web testing and traffic generation tool.
>
From: Robert I. <cor...@gm...> - 2007-08-06 05:55:18
On 8/6/07, BuraphaLinux Server <bur...@gm...> wrote:
>
> Hello,
>
> It appears that libcurl lacks good ftp support if you use more
> than one connection. I understand that you are not eager to write
> your own FTP client library and fixing theirs would be quite labor
> intensive. I will try to get this job done with shell scripting for
> now, but when (if?) libcurl is fixed so you can fix curl-loader, I
> will be ready to do full testing for you at that time.

Thanks for your understanding.

> For now, perhaps the man page needs to be updated to document this
> issue so nobody else attempts to do it? If that is ok, I can send a
> patch to update the man page (including stating that it's a libcurl
> limitation, not a curl-loader limitation).

Thanks, I can commit it to svn.
Well, we hope to correct it within a two-month period.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Robert I. <cor...@gm...> - 2007-08-06 07:17:31
On 8/6/07, BuraphaLinux Server <bur...@gm...> wrote:
>
> Hello,
>
> It appears that libcurl lacks good ftp support if you use more
> than one connection.

I have tried one more bit, CURLOPT_FORBID_REUSE, as Daniel recommended,
and it now closes old FTP connections for me.
Could you please give the latest svn a try?
Thanks.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: BuraphaLinux S. <bur...@gm...> - 2007-08-06 09:50:20
Hello,
The change is good. I still get a few failures, but it doesn't
degrade like before. Thank you so much for the fast fix! I had just
finished the man page patch and was going to reply to your last email
with it when I got this message. My failure rate is now between 2 and
3% for 50 connections with a 100-connection server limit. Things are
still very bad for 100 connections with a 100-connection server limit,
however. I will try to devise a small test case with small wireshark
traces for you to look at when you get a chance later. For now I can
work with just running at 1/2 capacity for my testing.
JGH
On 8/6/07, Robert Iakobashvili <cor...@gm...> wrote:
> On 8/6/07, BuraphaLinux Server <bur...@gm...> wrote:
> >
> > Hello,
> >
> > It appears that libcurl lacks good ftp support if you use more
> > than one connection.
> >
>
> I have tried one more bit, CURLOPT_FORBID_REUSE, as Daniel recommended,
> and it now closes old FTP connections for me.
> Could you, please, give a try for the latest svn?
> Thanks.
>
>
> Sincerely,
> Robert Iakobashvili,
> coroberti %x40 gmail %x2e com
> ...........................................................
> http://curl-loader.sourceforge.net
> A web testing and traffic generation tool.
>
From: Robert I. <cor...@gm...> - 2007-08-06 11:12:53
On 8/6/07, BuraphaLinux Server <bur...@gm...> wrote:
>
> Hello,
>
> The change is good. I still get a few failures, but it doesn't
> degrade like before. Thank you so much for the fast fix! I had just
> finished the man page patch and was going to reply to your last email
> with it when I got this message. My failure rate is now between 2 and
> 3% for 50 connections with a 100-connection server limit. Things are
> still very bad for 100 connections with a 100-connection server limit,
> however. I will try to devise a small test case with small wireshark
> traces for you to look at when you get a chance later. For now I can
> work with just running at 1/2 capacity for my testing.

Each FTP session has a control connection and a data connection. What
happens is the following:

a) an established control FTP connection is closed immediately, just
after a file transfer is accomplished;

b) an established data FTP connection is not closed after the transfer;
it is kept open during the sleeping time and is closed only after a new
data connection is opened.

Therefore it may be that you can only test at 1/2 capacity due to the
behavior described above. Hope this serves as a temporary workaround
for you.

Thanks for your testing.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.