curl-loader-devel Mailing List for curl-loader - web application testing (Page 36)

From: BuraphaLinux S. <bur...@gm...> - 2007-08-06 05:49:54

Hello,

It appears that libcurl lacks good FTP support if you use more than one
connection. I understand that you are not eager to write your own FTP client
library, and fixing theirs would be quite labor-intensive. I will try to get
this job done with shell scripting for now, but when (if?) libcurl is fixed
so that you can fix curl-loader, I will be ready to do full testing for you
at that time. For now, perhaps the man page should be updated to document
this issue so nobody else attempts it? If that is OK, I can send a patch to
update the man page (including stating that it is a libcurl limitation, not a
curl-loader limitation).

JGH

On 8/6/07, Robert Iakobashvili <cor...@gm...> wrote:
> On 8/5/07, BuraphaLinux Server <bur...@gm...> wrote:
> > I will wait for further instructions when you have something new for me
> > to test.
>
> It requires development of some libcurl features/patches, which seems to
> be development on a rather large scale. Hopefully, I could do it in
> September, but without commitment, sorry.
>
> Sincerely,
> Robert Iakobashvili,
> coroberti %x40 gmail %x2e com
> ...........................................................
> http://curl-loader.sourceforge.net
> A web testing and traffic generation tool.

From: Robert I. <cor...@gm...> - 2007-08-06 05:40:31

On 8/5/07, BuraphaLinux Server <bur...@gm...> wrote:
>
> I will wait for further instructions when you have something new for me to
> test.

It requires development of some libcurl features/patches, which seems to be
development on a rather large scale. Hopefully, I could do it in September,
but without commitment, sorry.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.

From: Robert I. <cor...@gm...> - 2007-08-06 04:23:00

On 8/6/07, Daniel Stenberg <da...@ha...> wrote:
> On Sun, 5 Aug 2007, Robert Iakobashvili wrote:
>
> > What I see is that neither removing a handle (FTP) from the multi-handle
> > nor curl_easy_reset() closes existing FTP connections.
> > How could it be done using the current libcurl C API?
>
> It can't!
>
> In recent libcurls, the particular connections are not even always the
> property of a particular easy handle, so curl_easy_reset() would not be
> the proper function to provide it.
>
> Removing an easy handle from a multi handle would certainly not be right
> either, I would say. And there as well, the connection is owned by the
> multi handle and not by the individual easy handle after a completed
> transfer.
>
> So, I guess we simply need to introduce a way to do it!

I guess that the ball is now on my side, which will require some moons to
pass until my expected future free time-slots. :)

What is even worse: setting the FRESH_CONNECT bit for a handle busy with FTP
does open fresh FTP connections (data and control), but without closing the
old connections.

Thank you.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.

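For readers hitting the same wall: libcurl at the time offered no explicit
"close this connection" call, but CURLOPT_FORBID_REUSE (close the connection
when its transfer completes, instead of returning it to the cache) paired
with CURLOPT_FRESH_CONNECT is the closest existing knob. A minimal sketch of
that pairing; whether it fully covered FTP control and data connections in
the 7.16.x versions discussed in this thread is exactly what the thread was
probing:

    #include <curl/curl.h>

    /* Sketch: force a brand-new connection for this transfer and have it
     * closed when the transfer finishes, so nothing lingers in the cache. */
    static CURLcode fetch_once_no_reuse(const char *url)
    {
        CURLcode rc = CURLE_FAILED_INIT;
        CURL *h = curl_easy_init();

        if (h)
        {
            curl_easy_setopt(h, CURLOPT_URL, url);
            /* Do not reuse a cached connection for this transfer. */
            curl_easy_setopt(h, CURLOPT_FRESH_CONNECT, 1L);
            /* Close the connection after the transfer instead of caching it. */
            curl_easy_setopt(h, CURLOPT_FORBID_REUSE, 1L);

            rc = curl_easy_perform(h);
            curl_easy_cleanup(h);
        }
        return rc;
    }
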
From: Robert I. <cor...@gm...> - 2007-08-05 16:45:03

On 8/5/07, BuraphaLinux Server <bur...@gm...> wrote:
>
> I'm sorry to be so much trouble.

No chance. We are supporting the code.

> I will wait for further instructions when you have something new for me to
> test.

Yes, please wait, thanks.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.

From: BuraphaLinux S. <bur...@gm...> - 2007-08-05 16:33:14

Hello,

I'm sorry to be so much trouble. I will wait for further instructions when
you have something new for me to test. Next time I'll use a 200K file
instead, as you have requested. The test file in this case was about 4.5MB.
Now that you can reproduce it, you know the bug is real at least :-)

JGH

On 8/5/07, Robert Iakobashvili <cor...@gm...> wrote:
> On 8/5/07, Robert Iakobashvili <cor...@gm...> wrote:
> >
> > Could you, please, place on your ftp-server a small file of 100-500 K
> > size and run curl-loader to fetch the file? Thanks.
> >
> > What I would like to see is just the TCP setup/closure and FTP
> > setup/closure, not the huge file-body flow.
> >
> > You may first wish to verify, however, that you can reproduce the
> > phenomenon with such a small file.
> >
> > By the way, what are the sizes of your files?
>
> I can reproduce the problem. Please, don't spend your time on the above.
> We have asked the libcurl forum for advice.
>
> --
> Sincerely,
> Robert Iakobashvili,
> coroberti %x40 gmail %x2e com
> ...........................................................
> http://curl-loader.sourceforge.net
> A web testing and traffic generation tool.

From: Robert I. <cor...@gm...> - 2007-08-05 15:22:39

On 8/5/07, Robert Iakobashvili <cor...@gm...> wrote:
>
> Could you, please, place on your ftp-server a small file of 100-500 K size
> and run curl-loader to fetch the file? Thanks.
>
> What I would like to see is just the TCP setup/closure and FTP
> setup/closure, not the huge file-body flow.
>
> You may first wish to verify, however, that you can reproduce the
> phenomenon with such a small file.
>
> By the way, what are the sizes of your files?

I can reproduce the problem. Please, don't spend your time on the above.
We have asked the libcurl forum for advice.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.

From: Robert I. <cor...@gm...> - 2007-08-05 14:07:50

On 8/5/07, BuraphaLinux Server <bur...@gm...> wrote:
> On 8/5/07, Robert Iakobashvili <cor...@gm...> wrote:
> > On 8/5/07, BuraphaLinux Server <bur...@gm...> wrote:
> > > Hello,
> > >
> > > I will get the wireshark trace and post a URL for you to download it.
> > > I can do it 3 times, once with 5000, once with 10000, and once with
> > > 15000. I will change TIMER_TCP_CONN_SETUP to 10. It will take a while
> > > to get the data; I am starting now. I will save all log files from the
> > > run on the client and server so you can look at them too.
> > >
> > > JGH
> >
> > 5000 and 15000 for a single client are enough.
> > Thanks. Sincerely, Robert Iakobashvili
>
> I finished before I got your email. Sorry. The wireshark trace files are
> truly huge. I compressed them, but they don't compress much. They are
> hundreds of MB. I can burn some CD-ROMs and send them to you by air mail
> if it's too big to download.

Could you, please, place on your ftp-server a small file of 100-500 K size
and run curl-loader to fetch the file? Thanks.

What I would like to see is just the TCP setup/closure and FTP setup/closure,
not the huge file-body flow.

You may first wish to verify, however, that you can reproduce the phenomenon
with such a small file.

By the way, what are the sizes of your files?

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.

From: BuraphaLinux S. <bur...@gm...> - 2007-08-05 13:29:06

On 8/5/07, Robert Iakobashvili <cor...@gm...> wrote:
> On 8/5/07, BuraphaLinux Server <bur...@gm...> wrote:
> > Hello,
> >
> > I will get the wireshark trace and post a URL for you to download it.
> > I can do it 3 times, once with 5000, once with 10000, and once with
> > 15000. I will change TIMER_TCP_CONN_SETUP to 10. It will take a while to
> > get the data; I am starting now. I will save all log files from the run
> > on the client and server so you can look at them too.
> >
> > JGH
>
> 5000 and 15000 for a single client are enough.
> Thanks. Sincerely, Robert Iakobashvili

I finished before I got your email. The wireshark trace files are truly
huge. I compressed them, but they don't compress much. They are hundreds of
MB. I can burn some CD-ROMs and send them to you by air mail if it's too big
to download.

Everything (including the source I used for curl-loader and vsftpd and the
build scripts for them, the 3 test runs, client and server logs, and the
huge wireshark stuff) is here:

http://www.buraphalinux.org/download/curl_loader/

JGH

From: Robert I. <cor...@gm...> - 2007-08-05 12:41:41

On 8/5/07, BuraphaLinux Server <bur...@gm...> wrote:
>
> Hello,
>
> I will get the wireshark trace and post a URL for you to download it.
> I can do it 3 times, once with 5000, once with 10000, and once with 15000.
> I will change TIMER_TCP_CONN_SETUP to 10. It will take a while to get the
> data; I am starting now. I will save all log files from the run on the
> client and server so you can look at them too.
>
> JGH

5000 and 15000 for a single client are enough. Thanks.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.

From: BuraphaLinux S. <bur...@gm...> - 2007-08-05 12:28:08

Hello,

I will get the wireshark trace and post a URL for you to download it. I can
do it 3 times, once with 5000, once with 10000, and once with 15000. I will
change TIMER_TCP_CONN_SETUP to 10. It will take a while to get the data; I
am starting now. I will save all log files from the run on the client and
server so you can look at them too.

JGH

On 8/5/07, Robert Iakobashvili <cor...@gm...> wrote:
> On 8/5/07, BuraphaLinux Server <bur...@gm...> wrote:
> >
> > Hello,
> >
> > In testing I did not see any difference. In the logs I get blocks like
> > this when curl-loader is cycling too fast (sorry, gmail wraps badly):
>
> The difference is that, with FRESH_CONNECT=1 and a positive sleep time,
> curl-loader is supposed to close its connection to the server, go to sleep
> for the sleep timeout, and re-establish the connection after sleeping.
>
> If you increase the sleeping time, you can hopefully observe the behavior
> in a sniffer.
>
> > : eff-url:
> > ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2, url:
> > 121 20 (10.16.68.197) <= Recv header: eff-url:
> > ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2, url:
> > 121 20 (10.16.68.197) :<= WARNING: parsing error: wrong response code (FTP?) 0 .
> > 121 20 (10.16.68.197) !! ERROR: This doesn't seem like a nice ftp-server response
> > : eff-url:
> > ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2, url:
> > 121 20 (10.16.68.197) :== Info: Expire cleared
> > : eff-url:
> > ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2, url:
> > 121 20 (10.16.68.197) :== Info: Closing connection #117
> >
> > I worry that 'parsing error' means it doesn't do the delay, but I don't
> > really know.
>
> It's not the issue.
>
> > The server side of this will say this in the vsftpd log file:
> >
> > Sun Aug 5 18:17:58 2007 [pid 10839] CONNECT: Client "10.16.68.197",
> > "Connection refused: too many sessions."
> >
> > Using ss -tan (or netstat) I can verify there are too many connections
> > on the server. The vsftpd server has been set for 100 clients with these
> > entries in vsftpd.conf:
> >
> > #
> > # Maximum number of simultaneous clients
> > #
> > max_clients=100
> > #
> > # Maximum number of connections from 1 ip
> > #
> > max_per_ip=100
> >
> > (and I painfully tested by hand that this limit works as expected)
> >
> > My configuration is ftp.small for curl-loader and is this:
> >
> > ########### GENERAL SECTION ################################
> >
> > BATCH_NAME= ftpsmall
> > CLIENTS_NUM_MAX=50 # Same as CLIENTS_NUM
> > CLIENTS_NUM_START=5
> > CLIENTS_RAMPUP_INC=5
> > INTERFACE=eth0
> > NETMASK=32
> > IP_ADDR_MIN=10.16.68.197
> > IP_ADDR_MAX=10.16.68.197
> > CYCLES_NUM=-1
> > URLS_NUM=1
> >
> > ########### URL SECTION ####################################
> >
> > URL=ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2
> > FRESH_CONNECT=1
> > URL_SHORT_NAME="small"
> > TIMER_URL_COMPLETION = 0 # In msec. When positive, it is enforced by cancelling the url fetch on timeout
> > TIMER_AFTER_URL_SLEEP =5000
> > TIMER_TCP_CONN_SETUP=50
>
> Looks good, but I would make TIMER_TCP_CONN_SETUP no larger than 10
> seconds.
>
> > and was invoked with this script (as root user):
> >
> > #! /sbin/bash
> > rm -fr ftpsmall.*
> > ulimit -d unlimited
> > ulimit -f unlimited
> > ulimit -m unlimited
> > ulimit -n 19999
> > ulimit -t unlimited
> > ulimit -u unlimited
> > ulimit -v unlimited
> > ulimit -x unlimited
> > echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
> > echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
> > echo 1 > /proc/sys/net/ipv4/tcp_ecn
> > echo 1 > /proc/sys/net/ipv4/tcp_window_scaling
> > curl-loader -f ftp.small -v -u -w
> >
> > With the limit set to 50 on the client and the delays,
>
> OK.
>
> Do you see connections closed by the client (curl-loader) in the network
> (a sniffer wireshark/ethereal capture with a single curl-loader client
> could be of assistance) and re-established after 5 seconds, or do the
> connections remain stalled?
>
> We are calling a libcurl function which is supposed to release the
> connection, but if this is buggy, we can dig into the issue.
>
> Another option is that the release by FTP could take some time.
> Could you try a larger timeout, like 10000 or 15000?
>
> Sincerely,
> Robert Iakobashvili,
> coroberti %x40 gmail %x2e com
> ...........................................................
> http://curl-loader.sourceforge.net
> A web testing and traffic generation tool.

From: Robert I. <cor...@gm...> - 2007-08-05 12:15:28

On 8/5/07, BuraphaLinux Server <bur...@gm...> wrote:
>
> Hello,
>
> In testing I did not see any difference. In the logs I get blocks like
> this when curl-loader is cycling too fast (sorry, gmail wraps badly):

The difference is that, with FRESH_CONNECT=1 and a positive sleep time,
curl-loader is supposed to close its connection to the server, go to sleep
for the sleep timeout, and re-establish the connection after sleeping.

If you increase the sleeping time, you can hopefully observe the behavior in
a sniffer.

> : eff-url:
> ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2, url:
> 121 20 (10.16.68.197) <= Recv header: eff-url:
> ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2, url:
> 121 20 (10.16.68.197) :<= WARNING: parsing error: wrong response code (FTP?) 0 .
> 121 20 (10.16.68.197) !! ERROR: This doesn't seem like a nice ftp-server response
> : eff-url:
> ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2, url:
> 121 20 (10.16.68.197) :== Info: Expire cleared
> : eff-url:
> ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2, url:
> 121 20 (10.16.68.197) :== Info: Closing connection #117
>
> I worry that 'parsing error' means it doesn't do the delay, but I don't
> really know.

It's not the issue.

> The server side of this will say this in the vsftpd log file:
>
> Sun Aug 5 18:17:58 2007 [pid 10839] CONNECT: Client "10.16.68.197",
> "Connection refused: too many sessions."
>
> Using ss -tan (or netstat) I can verify there are too many connections on
> the server. The vsftpd server has been set for 100 clients with these
> entries in vsftpd.conf:
>
> #
> # Maximum number of simultaneous clients
> #
> max_clients=100
> #
> # Maximum number of connections from 1 ip
> #
> max_per_ip=100
>
> (and I painfully tested by hand that this limit works as expected)
>
> My configuration is ftp.small for curl-loader and is this:
>
> ########### GENERAL SECTION ################################
>
> BATCH_NAME= ftpsmall
> CLIENTS_NUM_MAX=50 # Same as CLIENTS_NUM
> CLIENTS_NUM_START=5
> CLIENTS_RAMPUP_INC=5
> INTERFACE=eth0
> NETMASK=32
> IP_ADDR_MIN=10.16.68.197
> IP_ADDR_MAX=10.16.68.197
> CYCLES_NUM=-1
> URLS_NUM=1
>
> ########### URL SECTION ####################################
>
> URL=ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2
> FRESH_CONNECT=1
> URL_SHORT_NAME="small"
> TIMER_URL_COMPLETION = 0 # In msec. When positive, it is enforced by cancelling the url fetch on timeout
> TIMER_AFTER_URL_SLEEP =5000
> TIMER_TCP_CONN_SETUP=50

Looks good, but I would make TIMER_TCP_CONN_SETUP no larger than 10 seconds.

> and was invoked with this script (as root user):
>
> #! /sbin/bash
> rm -fr ftpsmall.*
> ulimit -d unlimited
> ulimit -f unlimited
> ulimit -m unlimited
> ulimit -n 19999
> ulimit -t unlimited
> ulimit -u unlimited
> ulimit -v unlimited
> ulimit -x unlimited
> echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
> echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
> echo 1 > /proc/sys/net/ipv4/tcp_ecn
> echo 1 > /proc/sys/net/ipv4/tcp_window_scaling
> curl-loader -f ftp.small -v -u -w
>
> With the limit set to 50 on the client and the delays,

OK.

Do you see connections closed by the client (curl-loader) in the network (a
sniffer wireshark/ethereal capture with a single curl-loader client could be
of assistance) and re-established after 5 seconds, or do the connections
remain stalled?

We are calling a libcurl function which is supposed to release the
connection, but if this is buggy, we can dig into the issue.

Another option is that the release by FTP could take some time. Could you
try a larger timeout, like 10000 or 15000?

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.

From: BuraphaLinux S. <bur...@gm...> - 2007-08-05 11:39:15

Hello,

In testing I did not see any difference. In the logs I get blocks like this
when curl-loader is cycling too fast (sorry, gmail wraps badly):

: eff-url:
ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2, url:
121 20 (10.16.68.197) <= Recv header: eff-url:
ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2, url:
121 20 (10.16.68.197) :<= WARNING: parsing error: wrong response code (FTP?) 0 .
121 20 (10.16.68.197) !! ERROR: This doesn't seem like a nice ftp-server response
: eff-url:
ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2, url:
121 20 (10.16.68.197) :== Info: Expire cleared
: eff-url:
ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2, url:
121 20 (10.16.68.197) :== Info: Closing connection #117

I worry that 'parsing error' means it doesn't do the delay, but I don't
really know.

The server side of this will say this in the vsftpd log file:

Sun Aug 5 18:17:58 2007 [pid 10839] CONNECT: Client "10.16.68.197",
"Connection refused: too many sessions."

Using ss -tan (or netstat) I can verify there are too many connections on
the server. The vsftpd server has been set for 100 clients with these
entries in vsftpd.conf:

#
# Maximum number of simultaneous clients
#
max_clients=100
#
# Maximum number of connections from 1 ip
#
max_per_ip=100

(and I painfully tested by hand that this limit works as expected)

My configuration is ftp.small for curl-loader and is this:

########### GENERAL SECTION ################################

BATCH_NAME= ftpsmall
CLIENTS_NUM_MAX=50 # Same as CLIENTS_NUM
CLIENTS_NUM_START=5
CLIENTS_RAMPUP_INC=5
INTERFACE=eth0
NETMASK=32
IP_ADDR_MIN=10.16.68.197
IP_ADDR_MAX=10.16.68.197
CYCLES_NUM=-1
URLS_NUM=1

########### URL SECTION ####################################

URL=ftp://anonymous:joe%040@10.16.68.213/apache/httpd/httpd-2.0.59.tar.bz2
FRESH_CONNECT=1
URL_SHORT_NAME="small"
TIMER_URL_COMPLETION = 0 # In msec. When positive, it is enforced by cancelling the url fetch on timeout
TIMER_AFTER_URL_SLEEP =5000
TIMER_TCP_CONN_SETUP=50

and was invoked with this script (as root user):

#! /sbin/bash
rm -fr ftpsmall.*
ulimit -d unlimited
ulimit -f unlimited
ulimit -m unlimited
ulimit -n 19999
ulimit -t unlimited
ulimit -u unlimited
ulimit -v unlimited
ulimit -x unlimited
echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
echo 1 > /proc/sys/net/ipv4/tcp_ecn
echo 1 > /proc/sys/net/ipv4/tcp_window_scaling
curl-loader -f ftp.small -v -u -w

With the limit set to 50 on the client and the delays, I expect having the
server set to 100 should be enough to let this script run without errors,
but it never does. What am I doing wrong? I can provide more data, but I
need to know what data will help you. Please let me know. I didn't attach
the entire curl-loader log since, even for a few-minute run, it was 38MB.

JGH

On 8/5/07, Robert Iakobashvili <cor...@gm...> wrote:
> On 8/2/07, BuraphaLinux Server <bur...@gm...> wrote:
> > On 8/2/07, Robert Iakobashvili <cor...@gm...> wrote:
> > >
> > > My understanding here is that, when FRESH_CONNECT is set, we need:
> > > 1) to close the connection;
> > > 2) only after the connection is closed, to sleep for
> > > TIMER_AFTER_URL_SLEEP;
> >
> > Yes, you understand the problem and what the solution is. I look forward
> > to testing your fix. Thanks!
> >
> > JGH
>
> Please, try the latest svn version.
> Unfortunately, I am short of time to test it properly and only ran a small
> test. Please test whether it works for you.
>
> --
> Sincerely,
> Robert Iakobashvili,
> coroberti %x40 gmail %x2e com
> ...........................................................
> http://curl-loader.sourceforge.net
> A web testing and traffic generation tool.

From: Robert I. <cor...@gm...> - 2007-08-05 08:36:33

On 8/2/07, BuraphaLinux Server <bur...@gm...> wrote:
> On 8/2/07, Robert Iakobashvili <cor...@gm...> wrote:
> >
> > My understanding here is that, when FRESH_CONNECT is set, we need:
> > 1) to close the connection;
> > 2) only after the connection is closed, to sleep for
> > TIMER_AFTER_URL_SLEEP;
>
> Yes, you understand the problem and what the solution is. I look forward
> to testing your fix. Thanks!
>
> JGH

Please, try the latest svn version.
Unfortunately, I am short of time to test it properly and only ran a small
test. Please test whether it works for you.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.

From: Robert I. <cor...@gm...> - 2007-08-03 07:52:52

FYI,

Note that the sourceforge-based curl-loader archives are experiencing
periodic problems. If you do not see your messages in the archives, sorry,
but we are depending on sourceforge.

Sincerely,
Robert

---------- Forwarded message ----------
From: SourceForge.net <no...@so...>
Date: Aug 2, 2007 10:15 PM
Subject: [ alexandria-Support Requests-1766422 ] curl-loader-devel mailing
list archive do not work
To: no...@so...

Support Requests item #1766422, was opened at 2007-08-02 12:55
Message generated for change (Comment added) made by wdavison
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=200001&aid=1766422&group_id=1

Please note that this message will contain a full copy of the comment
thread, including the initial issue submission, for this request, not just
the latest update.

Category: Project Mailing Lists/Archives/Services
>Group: Second Level Support
>Status: Closed
Priority: 5
Private: No
Submitted By: Robert Iakobashvili (coroberti)
>Assigned to: Wayne Davison (wdavison)
Summary: curl-loader-devel mailing list archive do not work

Initial Comment:
First issue: the curl-loader-devel mailing list archive does not present
recent e-mails. The problem was first observed about 7-10 days ago. The
e-mails are re-addressed properly, but are not archived. Please restore all
e-mails in the archives and make the archives function properly.

Second issue: what should be done in order for the archives to be indexed
and searchable by google, yahoo, etc.?

Thank you in advance,
Robert

----------------------------------------------------------------------

>Comment By: Wayne Davison (wdavison)
Date: 2007-08-02 13:15

Message:
Logged In: YES
user_id=1546419
Originator: NO

Greetings,

I fixed a problem with the archive updating, and the list's archive now
appears to be up-to-date.

SourceForge.net Support

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=200001&aid=1766422&group_id=1

From: BuraphaLinux S. <bur...@gm...> - 2007-08-02 13:11:36

On 8/2/07, Robert Iakobashvili <cor...@gm...> wrote:
> On 8/2/07, BuraphaLinux Server <bur...@gm...> wrote:
> > Hello,
>
> Nice hearing from you.
>
> > FRESH_CONNECT=1
> > TIMER_AFTER_URL_SLEEP=5000
> >
> > So what I expect to happen is that after the download and the connection
> > is closed, there will be a 5 second pause, then a new connection will be
> > opened. Unfortunately this is not what happens. I guess that after the
> > download there is a 5 second pause, then a close and reopen almost
> > instantly.
>
> My understanding here is that, when FRESH_CONNECT is set, we need:
> 1) to close the connection;
> 2) only after the connection is closed, to sleep for
> TIMER_AFTER_URL_SLEEP;
>
> Please confirm that I got it. Thanks.
> In any case your proposal is "The Right Thing to Do" (myTM) :)
>
> > What happens is that after one batch of URLs is fetched, curl-loader
> > goes insane. It tries to connect instantly and goes into an ultra-fast
> > loop doing that. Well, the ftp server correctly just refuses everything
> > at that point, since curl-loader is opening hundreds of connections
> > without waiting between failures.
>
> It is actually DoS-ing. Hope that this will not be the primary usage of
> the tool. :)
>
> I will try next week (Sunday/Monday) to come up with this correction and
> will inform you when it is committed.
>
> My best wishes.
>
> Sincerely,
> Robert Iakobashvili,

Hello,

Yes, you understand the problem and what the solution is. I look forward to
testing your fix. Thanks!

JGH

From: Robert I. <cor...@gm...> - 2007-08-02 11:27:09

On 8/2/07, BuraphaLinux Server <bur...@gm...> wrote:
>
> Hello,

Nice hearing from you.

> FRESH_CONNECT=1
> TIMER_AFTER_URL_SLEEP=5000
>
> So what I expect to happen is that after the download and the connection
> is closed, there will be a 5 second pause, then a new connection will be
> opened. Unfortunately this is not what happens. I guess that after the
> download there is a 5 second pause, then a close and reopen almost
> instantly.

My understanding here is that, when FRESH_CONNECT is set, we need:
1) to close the connection;
2) only after the connection is closed, to sleep for TIMER_AFTER_URL_SLEEP;

Please confirm that I got it. Thanks.
In any case your proposal is "The Right Thing to Do" (myTM) :)

> What happens is that after one batch of URLs is fetched, curl-loader goes
> insane. It tries to connect instantly and goes into an ultra-fast loop
> doing that. Well, the ftp server correctly just refuses everything at that
> point, since curl-loader is opening hundreds of connections without
> waiting between failures.

It is actually DoS-ing. Hope that this will not be the primary usage of the
tool. :)

I will try next week (Sunday/Monday) to come up with this correction and
will inform you when it is committed.

My best wishes.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.

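The two-step ordering spelled out above (close first, only then start the
sleep) is the crux of the fix that later went into svn. A hedged sketch of
that sequencing around libcurl's multi interface; schedule_sleep is a
hypothetical timer helper for illustration, not curl-loader's actual
internals:

    #include <curl/curl.h>

    /* Hypothetical timer helper: re-adds the handle to the multi handle
     * when the timer expires. Not a curl-loader or libcurl function. */
    void schedule_sleep(CURL *easy_handle, long sleep_msec);

    /* Sketch of the intended ordering for FRESH_CONNECT=1 clients. */
    static void on_url_completed(CURLM *multi, CURL *easy, long after_url_sleep_msec)
    {
        /* Step 1: detach the finished transfer. With CURLOPT_FORBID_REUSE
         * set on the handle, its connection is closed rather than returned
         * to the connection cache. */
        curl_multi_remove_handle(multi, easy);

        /* Step 2: only now start counting TIMER_AFTER_URL_SLEEP, so the
         * server sees the session closed for the entire sleep interval. */
        schedule_sleep(easy, after_url_sleep_msec);
    }
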
From: BuraphaLinux S. <bur...@gm...> - 2007-08-02 10:49:37

Hello,

I have a severe problem using curl-loader with vsftpd. I reported it once
before, but I guess I didn't supply enough information.

I use FRESH_CONNECT=1 and I use TIMER_AFTER_URL_SLEEP=5000.

So what I expect to happen is that after the download and the connection is
closed, there will be a 5 second pause, then a new connection will be
opened. Unfortunately this is not what happens. I guess that after the
download there is a 5 second pause, then a close and reopen almost
instantly.

The problem is that on my ftp server it takes time to tear down the socket
and end the process. Most human users would have a gap while they pick
something else and click on it. Automated tools would reuse the connection
(no FRESH_CONNECT=1). I am trying to simulate humans hitting the ftp site
quickly, but not instantly. I have 'max_clients' set in my vsftpd.conf file
to limit the total number of people connecting.

What happens is that after one batch of URLs is fetched, curl-loader goes
insane. It tries to connect instantly and goes into an ultra-fast loop doing
that. Well, the ftp server correctly just refuses everything at that point,
since curl-loader is opening hundreds of connections without waiting between
failures. curl-loader does not slow down when the server notifies it that
there are too many connections. I can see that parsing such notifications
for many different servers may not be practical, and I have no problem with
that. I do need a configurable pause between connection attempts, however. I
want to have the system pause after disconnection when I'm using
FRESH_CONNECT=1, and I guess we need some new TIMER_BETWEEN_CONNECTIONS or
some such. Before I go and code that, I would like to know if it can already
be done some other way.

I can provide wireshark traces if you need them, ftp logs, etc. I am using
curl_loader-0.41.

Thank you,
JGH

From: Robert I. <cor...@gm...> - 2007-07-28 06:29:31

Hi Artur,

On 7/27/07, Artur B <ar...@gm...> wrote:
> Robert,

Next time it may be somebody else, e.g. Michael, who is now in "deep
development" of the next-generation features.
The PROBLEM_REPORTING form is always the best thing to start with.

> Another bug I've noticed: an unrealistically large "To" data rate reported
> in the Summary Stats:
> Req:10289,1xx:3487,2xx:8139,3xx:1953,4xx:0,5xx:5,Err:134,T-Err:223,D:1086ms,D-2xx:1824ms,Ti:449643B/s,To:124210146B/s

Y, it looks like a bug in the counting of averages.
Thanks for reporting it.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.

From: Artur B <ar...@gm...> - 2007-07-27 20:01:45

Robert,

Another bug I've noticed: an unrealistically large "To" data rate reported
in the Summary Stats:

----------------------------------------------------------------------
Interval stats (latest:3 sec, clients:100, CAPS-curr:0):
H/F Req:0,1xx:0,2xx:0,3xx:0,4xx:0,5xx:0,Err:0,T-Err:0,D:0ms,D-2xx:0ms,Ti:0B/s,To:0B/s
H/F/S Req:80,1xx:26,2xx:57,3xx:20,4xx:0,5xx:0,Err:0,T-Err:0,D:1270ms,D-2xx:2264ms,Ti:492135B/s,To:19977B/s
--------------------------------------------------------------------------------
Summary stats (runs:415 secs, CAPS-average:1):
H/F Req:0,1xx:0,2xx:0,3xx:0,4xx:0,5xx:0,Err:0,T-Err:0,D:0ms,D-2xx:0ms,Ti:0B/s,To:0B/s
H/F/S Req:10289,1xx:3487,2xx:8139,3xx:1953,4xx:0,5xx:5,Err:134,T-Err:223,D:1086ms,D-2xx:1824ms,Ti:449643B/s,To:124210146B/s
=================================================================================
Manual: clients:max[100],curr[100]. Inc num: [+|*].

Artur

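A sanity check on the report: 10289 requests over a 415-second run cannot
plausibly average 124 MB/s upstream when the latest 3-second interval shows
about 20 KB/s. A byte-rate average has to divide a cumulative counter by the
elapsed time over which it accumulated; dividing the run-total counter by
the short reporting interval instead inflates the figure by roughly
run/interval. A sketch of the distinction, with illustrative field names
rather than curl-loader's actual statistics code:

    /* Illustrative rate bookkeeping; these are not curl-loader's actual
     * structures or field names. */
    struct traffic_stats
    {
        unsigned long long bytes_out_interval; /* reset at every reporting tick */
        unsigned long long bytes_out_total;    /* accumulated over the whole run */
    };

    /* Latest-interval rate: interval bytes over interval seconds. */
    double rate_interval(const struct traffic_stats *s, double interval_secs)
    {
        return interval_secs > 0.0 ?
            (double) s->bytes_out_interval / interval_secs : 0.0;
    }

    /* Summary rate: cumulative bytes over the WHOLE run. Dividing
     * bytes_out_total by the 3-second interval instead of the 415-second
     * run would inflate the result by a factor of about 138, the kind of
     * blow-up seen in the "To" value above. */
    double rate_summary(const struct traffic_stats *s, double run_secs)
    {
        return run_secs > 0.0 ?
            (double) s->bytes_out_total / run_secs : 0.0;
    }
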
From: Robert I. <cor...@gm...> - 2007-07-26 18:00:38

Nothing special, but we are on the map at this most prestigious linux news
site. Please search this page for curl-loader:

http://lwn.net/Articles/241467/

Thanks to all the great people that encouraged us. Special thanks to Daniel
Stenberg (libcurl) and Mark Aberdour (www.opensourcetesting.org).

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.

From: web l. a. p. t. t. <cur...@li...> - 2007-07-26 11:40:39

The version is released due to a critical bugfix: loading of
credentials/tokens from file was broken. Thanks to Artur Barashev for
fixing it.

Besides that, curl-loader has been moved to use the latest libcurl 7.16.4
version, with a couple of our patches to the library being in mainline.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.

From: web l. a. p. t. t. <cur...@li...> - 2007-07-26 05:43:41

On 7/26/07, web loading and performance testing tool
<cur...@li...> wrote:
>
> Robert,
>
> Just because it may work once or twice doesn't mean it will always work;
> the function may just use exactly the same stack space in memory once or
> twice.

It could be the case.

> The bottom line: it wasn't correct,

Agree. But don't kill me, running with zero bandwidth. :)

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.

From: web l. a. p. t. t. <cur...@li...> - 2007-07-26 05:26:21

Robert,

Just because it may work once or twice doesn't mean it will always work; the
function may just use exactly the same stack space in memory once or twice.
The bottom line: it wasn't correct; one should never point to local function
variables after that function returns. It is correct now, though, with the
static declaration of that string array.

Also, it would be nice to document the precedence of the separator
characters used in this code. Currently it's not clear from the
documentation.

Artur

On 7/25/07, web loading and performance testing tool
<cur...@li...> wrote:
>
> Hi Artur,
>
> On 7/25/07, web loading and performance testing tool
> > 1) You are welcome, and thank you for creating such a useful tool. My
> > full name is Artur Barashev.
>
> Thanks, file corrected.
>
> > 2) Did you test your fix? :)
>
> Yes, and the test was corrected as well.
>
> > It looks like the way you fixed it isn't correct; you are saving the
> > separator pointer by pointing it to a locally allocated string
> > (allocated on the function stack).
>
> No, this is correct. You can test.
> Still, added the word static (only for clarity).
>
> Thanks. It looks like we need to release a new stable version.
>
> Sincerely,
> Robert Iakobashvili,
> coroberti %x40 gmail %x2e com
> ...........................................................
> http://curl-loader.sourceforge.net
> A web testing and traffic generation tool.

From: web l. a. p. t. t. <cur...@li...> - 2007-07-26 04:18:47

Hi Artur,

On 7/25/07, web loading and performance testing tool
> 1) You are welcome, and thank you for creating such a useful tool. My full
> name is Artur Barashev.

Thanks, file corrected.

> 2) Did you test your fix? :)

Yes, and the test was corrected as well.

> It looks like the way you fixed it isn't correct; you are saving the
> separator pointer by pointing it to a locally allocated string (allocated
> on the function stack).

No, this is correct. You can test.
Still, added the word static (only for clarity).

Thanks. It looks like we need to release a new stable version.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.

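The disagreement in these two messages turns on a fine point of C lifetimes:
a string literal has static storage duration, so a pointer to a literal
stays valid after the function returns even when the pointer array holding
it is automatic; the "static" in the eventual commit was indeed only for
clarity. What is never valid is returning a pointer into an automatic
array's own storage. A condensed illustration with hypothetical function
names (the actual patch appears in the commit message further down):

    #include <string.h>

    /* DANGLING: buf[] has automatic storage; the returned pointer refers
     * to a dead stack frame. It may appear to work once or twice if the
     * stack slot is not reused - the failure mode Artur warns about. */
    const char *separator_dangling(void)
    {
        char buf[2] = { ',', '\0' }; /* local array on the stack */
        return buf;                  /* invalid once the function returns */
    }

    /* VALID: a string literal has static storage duration, so a pointer to
     * it outlives the call even though the pointer array itself is
     * automatic. This is why the patched code below is correct. */
    const char *separator_valid(const char *input)
    {
        const char *separators[] = { ",", ":", ";", " ", "@", "/", 0 };
        int i;

        for (i = 0; separators[i]; i++)
        {
            if (strchr(input, *separators[i]))
                return separators[i]; /* points at a literal: still valid */
        }
        return 0;
    }
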
From: web l. a. p. t. t. <cur...@li...> - 2007-07-25 16:32:48

Robert,

1) You are welcome, and thank you for creating such a useful tool. My full
name is Artur Barashev.

2) Did you test your fix? :) It looks like the way you fixed it isn't
correct; you are saving the separator pointer by pointing it to a locally
allocated string (allocated on the function stack). If you want your fix to
work, I suggest you move the separators_supported[] string array into global
data space (at the top of the file, outside of the function).

Artur

----- Original Message -----
From: web loading and performance testing tool
To: cur...@li...
Sent: Wednesday, July 25, 2007 3:03 AM
Subject: Problem with cookies

Arthur,

You are correct. This was broken in the 0.32 version. This is a shame on my
side, to make such a stupid error. My test was not fetching that as well.
The matters have been corrected. Thank you.

Added you to our THANKS file. If you can provide your surname, it will be
added. The patch and commit you can see below.

Sincerely,
Robert

---------- Forwarded message ----------
From: cor...@us... <cor...@us...>
Date: Jul 25, 2007 12:48 PM
Subject: SF.net SVN: curl-loader: [488] trunk/curl-loader
To: cur...@li...

Revision: 488
http://curl-loader.svn.sourceforge.net/curl-loader/?rev=488&view=rev
Author: coroberti
Date: 2007-07-25 02:47:58 -0700 (Wed, 25 Jul 2007)

Log Message:
-----------
Bugfix for loading credentials from file.

Modified Paths:
--------------
trunk/curl-loader/conf-examples/credentials.cred
trunk/curl-loader/conf-examples/post-form-tokens-fr-file.conf
trunk/curl-loader/doc/QUICK-START
trunk/curl-loader/doc/THANKS
trunk/curl-loader/parse_conf.c

Modified: trunk/curl-loader/conf-examples/credentials.cred
===================================================================
--- trunk/curl-loader/conf-examples/credentials.cred	2007-07-22 05:02:20 UTC (rev 487)
+++ trunk/curl-loader/conf-examples/credentials.cred	2007-07-25 09:47:58 UTC (rev 488)
@@ -1,4 +1,3 @@
-# Separator used here is ':' Use %20, when you need a white space
-david:meleh
-israel:hai
-hai:vkayam
+# Separator used here is ','
+david,meleh,israel
+hai,hai,vkayam

Modified: trunk/curl-loader/conf-examples/post-form-tokens-fr-file.conf
===================================================================
--- trunk/curl-loader/conf-examples/post-form-tokens-fr-file.conf	2007-07-22 05:02:20 UTC (rev 487)
+++ trunk/curl-loader/conf-examples/post-form-tokens-fr-file.conf	2007-07-25 09:47:58 UTC (rev 488)
@@ -1,6 +1,6 @@
 ########### GENERAL SECTION ################################
 BATCH_NAME= post-form-tokens-fr-file
-CLIENTS_NUM_MAX=3
+CLIENTS_NUM_MAX=2
 INTERFACE=eth0
 NETMASK=24
 IP_ADDR_MIN=194.90.71.215

Modified: trunk/curl-loader/doc/QUICK-START
===================================================================
--- trunk/curl-loader/doc/QUICK-START	2007-07-22 05:02:20 UTC (rev 487)
+++ trunk/curl-loader/doc/QUICK-START	2007-07-25 09:47:58 UTC (rev 488)
@@ -16,11 +16,10 @@
 files (e.g. crypto.h), export environment variable OPENSSLDIR with
 the value of that directory.
 For example: $export OPENSSLDIR=the-full-path-to-the-directory
-
-Another known issue is libidn.so, which means, that some linux distributions
-do have some libidn.so.11, but not libidn.so. Resolve it by creating a softlink.
-In some cases, you may be required to remove -lidn from the linking line,
-whereas you need to comment a line in Makefile and uncomment another.
+
+We are building libcurl with --without-libidn option. Users, that would like
+to resolve IDNA domain names with "international" letters can edit our
+Makefile and use instead --with-libidn=full-path-dir.

 Run the following commands from your bash linux shell:
 $tar zxfv curl-loader-<version>.tar.gz

Modified: trunk/curl-loader/doc/THANKS
===================================================================
--- trunk/curl-loader/doc/THANKS	2007-07-22 05:02:20 UTC (rev 487)
+++ trunk/curl-loader/doc/THANKS	2007-07-25 09:47:58 UTC (rev 488)
@@ -11,3 +11,4 @@
 Aleksandar Lazic <al-...@no...>
 Jeremy Hicks <je...@no...>
 John Gatewood Ham <bur...@gm...>
+Artur B <ar...@gm...>

Modified: trunk/curl-loader/parse_conf.c
===================================================================
--- trunk/curl-loader/parse_conf.c	2007-07-22 05:02:20 UTC (rev 487)
+++ trunk/curl-loader/parse_conf.c	2007-07-25 09:47:58 UTC (rev 488)
@@ -213,7 +213,7 @@
                  size_t input_length,
                  form_records_cdata* form_record,
                  size_t record_num,
-                 char* separator);
+                 char** separator);

 static int add_param_to_batch (char*const input,
                  size_t input_length,
@@ -388,17 +388,17 @@
                  size_t input_len,
                  form_records_cdata* form_record,
                  size_t record_num,
-                 char* separator)
+                 char** separator)
 {
-    const char separators_supported [] =
+    const char* separators_supported [] =
     {
-        ',',
-        ':',
-        ';',
-        ' ',
-        '@',
-        '/',
-        '\0'
+        ",",
+        ":",
+        ";",
+        " ",
+        "@",
+        "/",
+        0
     };
     char* sp = NULL;
     int i;
@@ -416,9 +416,9 @@
     {
        for (i = 0; separators_supported [i]; i++)
        {
-           if ((sp = strchr (input, separators_supported [i])))
+           if ((sp = strchr (input, *separators_supported [i])))
            {
-               *separator = *sp; /* Remember the separator */
+               *separator = (char *) separators_supported [i]; /* Remember the separator */
                break;
            }
        }
@@ -429,9 +429,10 @@
            "%s - failed to locate in the first string \"%s\" \n"
            "any supported separator.\nThe supported separators are:\n",
            __func__, input);
+
        for (i = 0; separators_supported [i]; i++)
        {
-           fprintf (stderr,"\"%c\"\n", separators_supported [i]);
+           fprintf (stderr,"\"%s\"\n", separators_supported [i]);
        }
        return -1;
     }
@@ -440,9 +441,9 @@
     char * token = 0, *strtokp = 0;
     size_t token_count = 0;

-    for (token = strtok_r (input, separator, &strtokp);
+    for (token = strtok_r (input, *separator, &strtokp);
          token != 0;
-         token = strtok_r (0, separator, &strtokp))
+         token = strtok_r (0, *separator, &strtokp))
     {
         size_t token_len = strlen (token);

@@ -2687,7 +2688,7 @@
 {
     char fgets_buff[512];
     FILE* fp;
-    char sep;
+    char* sep = 0; // strtok_r requires a string with '\0'

     /* Open the file with form records
@@ -2737,6 +2738,16 @@
             continue;
         }

+        if (fgets_buff[string_len - 2] == '\r')
+        {
+            fgets_buff[string_len - 2] = '\0';
+        }
+
+        if (fgets_buff[string_len - 1] == '\n')
+        {
+            fgets_buff[string_len - 1] = '\0';
+        }
+
         if ((int)url->form_records_num >= bctx->client_num_max)
         {
             fprintf (stderr,

This was sent by the SourceForge.net collaborative development platform, the
world's largest Open Source development site.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.